Dec 10 18:51:13 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 10 18:51:13 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 10 18:51:13 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 10 18:51:13 localhost kernel: BIOS-provided physical RAM map:
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 10 18:51:13 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 10 18:51:13 localhost kernel: NX (Execute Disable) protection: active
Dec 10 18:51:13 localhost kernel: APIC: Static calls initialized
Dec 10 18:51:13 localhost kernel: SMBIOS 2.8 present.
Dec 10 18:51:13 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 10 18:51:13 localhost kernel: Hypervisor detected: KVM
Dec 10 18:51:13 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 10 18:51:13 localhost kernel: kvm-clock: using sched offset of 3743999210 cycles
Dec 10 18:51:13 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 10 18:51:13 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 10 18:51:13 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 10 18:51:13 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 10 18:51:13 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 10 18:51:13 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 10 18:51:13 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 10 18:51:13 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 10 18:51:13 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 10 18:51:13 localhost kernel: Using GB pages for direct mapping
Dec 10 18:51:13 localhost kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 10 18:51:13 localhost kernel: ACPI: Early table checksum verification disabled
Dec 10 18:51:13 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 10 18:51:13 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 10 18:51:13 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 10 18:51:13 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 10 18:51:13 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 10 18:51:13 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 10 18:51:13 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 10 18:51:13 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 10 18:51:13 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 10 18:51:13 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 10 18:51:13 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 10 18:51:13 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 10 18:51:13 localhost kernel: No NUMA configuration found
Dec 10 18:51:13 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 10 18:51:13 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 10 18:51:13 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 10 18:51:13 localhost kernel: Zone ranges:
Dec 10 18:51:13 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 10 18:51:13 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 10 18:51:13 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 10 18:51:13 localhost kernel:   Device   empty
Dec 10 18:51:13 localhost kernel: Movable zone start for each node
Dec 10 18:51:13 localhost kernel: Early memory node ranges
Dec 10 18:51:13 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 10 18:51:13 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 10 18:51:13 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 10 18:51:13 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 10 18:51:13 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 10 18:51:13 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 10 18:51:13 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 10 18:51:13 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 10 18:51:13 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 10 18:51:13 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 10 18:51:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 10 18:51:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 10 18:51:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 10 18:51:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 10 18:51:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 10 18:51:13 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 10 18:51:13 localhost kernel: TSC deadline timer available
Dec 10 18:51:13 localhost kernel: CPU topo: Max. logical packages:   8
Dec 10 18:51:13 localhost kernel: CPU topo: Max. logical dies:       8
Dec 10 18:51:13 localhost kernel: CPU topo: Max. dies per package:   1
Dec 10 18:51:13 localhost kernel: CPU topo: Max. threads per core:   1
Dec 10 18:51:13 localhost kernel: CPU topo: Num. cores per package:     1
Dec 10 18:51:13 localhost kernel: CPU topo: Num. threads per package:   1
Dec 10 18:51:13 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 10 18:51:13 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 10 18:51:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 10 18:51:13 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 10 18:51:13 localhost kernel: Booting paravirtualized kernel on KVM
Dec 10 18:51:13 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 10 18:51:13 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 10 18:51:13 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 10 18:51:13 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 10 18:51:13 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 10 18:51:13 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 10 18:51:13 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 10 18:51:13 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 10 18:51:13 localhost kernel: random: crng init done
Dec 10 18:51:13 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 10 18:51:13 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 10 18:51:13 localhost kernel: Fallback order for Node 0: 0 
Dec 10 18:51:13 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 10 18:51:13 localhost kernel: Policy zone: Normal
Dec 10 18:51:13 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 10 18:51:13 localhost kernel: software IO TLB: area num 8.
Dec 10 18:51:13 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 10 18:51:13 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 10 18:51:13 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 10 18:51:13 localhost kernel: Dynamic Preempt: voluntary
Dec 10 18:51:13 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 10 18:51:13 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 10 18:51:13 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 10 18:51:13 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 10 18:51:13 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 10 18:51:13 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 10 18:51:13 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 10 18:51:13 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 10 18:51:13 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 10 18:51:13 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 10 18:51:13 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 10 18:51:13 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 10 18:51:13 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 10 18:51:13 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 10 18:51:13 localhost kernel: Console: colour VGA+ 80x25
Dec 10 18:51:13 localhost kernel: printk: console [ttyS0] enabled
Dec 10 18:51:13 localhost kernel: ACPI: Core revision 20230331
Dec 10 18:51:13 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 10 18:51:13 localhost kernel: x2apic enabled
Dec 10 18:51:13 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 10 18:51:13 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 10 18:51:13 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 10 18:51:13 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 10 18:51:13 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 10 18:51:13 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 10 18:51:13 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 10 18:51:13 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 10 18:51:13 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 10 18:51:13 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 10 18:51:13 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 10 18:51:13 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 10 18:51:13 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 10 18:51:13 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 10 18:51:13 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 10 18:51:13 localhost kernel: x86/bugs: return thunk changed
Dec 10 18:51:13 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 10 18:51:13 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 10 18:51:13 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 10 18:51:13 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 10 18:51:13 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 10 18:51:13 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 10 18:51:13 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 10 18:51:13 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 10 18:51:13 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 10 18:51:13 localhost kernel: landlock: Up and running.
Dec 10 18:51:13 localhost kernel: Yama: becoming mindful.
Dec 10 18:51:13 localhost kernel: SELinux:  Initializing.
Dec 10 18:51:13 localhost kernel: LSM support for eBPF active
Dec 10 18:51:13 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 10 18:51:13 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 10 18:51:13 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 10 18:51:13 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 10 18:51:13 localhost kernel: ... version:                0
Dec 10 18:51:13 localhost kernel: ... bit width:              48
Dec 10 18:51:13 localhost kernel: ... generic registers:      6
Dec 10 18:51:13 localhost kernel: ... value mask:             0000ffffffffffff
Dec 10 18:51:13 localhost kernel: ... max period:             00007fffffffffff
Dec 10 18:51:13 localhost kernel: ... fixed-purpose events:   0
Dec 10 18:51:13 localhost kernel: ... event mask:             000000000000003f
Dec 10 18:51:13 localhost kernel: signal: max sigframe size: 1776
Dec 10 18:51:13 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 10 18:51:13 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 10 18:51:13 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 10 18:51:13 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 10 18:51:13 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 10 18:51:13 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 10 18:51:13 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 10 18:51:13 localhost kernel: node 0 deferred pages initialised in 21ms
Dec 10 18:51:13 localhost kernel: Memory: 7764040K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618220K reserved, 0K cma-reserved)
Dec 10 18:51:13 localhost kernel: devtmpfs: initialized
Dec 10 18:51:13 localhost kernel: x86/mm: Memory block size: 128MB
Dec 10 18:51:13 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 10 18:51:13 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 10 18:51:13 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 10 18:51:13 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 10 18:51:13 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 10 18:51:13 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 10 18:51:13 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 10 18:51:13 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 10 18:51:13 localhost kernel: audit: type=2000 audit(1765392671.430:1): state=initialized audit_enabled=0 res=1
Dec 10 18:51:13 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 10 18:51:13 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 10 18:51:13 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 10 18:51:13 localhost kernel: cpuidle: using governor menu
Dec 10 18:51:13 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 10 18:51:13 localhost kernel: PCI: Using configuration type 1 for base access
Dec 10 18:51:13 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 10 18:51:13 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 10 18:51:13 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 10 18:51:13 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 10 18:51:13 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 10 18:51:13 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 10 18:51:13 localhost kernel: Demotion targets for Node 0: null
Dec 10 18:51:13 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 10 18:51:13 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 10 18:51:13 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 10 18:51:13 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 10 18:51:13 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 10 18:51:13 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 10 18:51:13 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 10 18:51:13 localhost kernel: ACPI: Interpreter enabled
Dec 10 18:51:13 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 10 18:51:13 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 10 18:51:13 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 10 18:51:13 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 10 18:51:13 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 10 18:51:13 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 10 18:51:13 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [3] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [4] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [5] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [6] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [7] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [8] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [9] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [10] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [11] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [12] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [13] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [14] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [15] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [16] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [17] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [18] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [19] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [20] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [21] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [22] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [23] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [24] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [25] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [26] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [27] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [28] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [29] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [30] registered
Dec 10 18:51:13 localhost kernel: acpiphp: Slot [31] registered
Dec 10 18:51:13 localhost kernel: PCI host bridge to bus 0000:00
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 10 18:51:13 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 10 18:51:13 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 10 18:51:13 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 10 18:51:13 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 10 18:51:13 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 10 18:51:13 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 10 18:51:13 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 10 18:51:13 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 10 18:51:13 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 10 18:51:13 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 10 18:51:13 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 10 18:51:13 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 10 18:51:13 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 10 18:51:13 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 10 18:51:13 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 10 18:51:13 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 10 18:51:13 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 10 18:51:13 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 10 18:51:13 localhost kernel: iommu: Default domain type: Translated
Dec 10 18:51:13 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 10 18:51:13 localhost kernel: SCSI subsystem initialized
Dec 10 18:51:13 localhost kernel: ACPI: bus type USB registered
Dec 10 18:51:13 localhost kernel: usbcore: registered new interface driver usbfs
Dec 10 18:51:13 localhost kernel: usbcore: registered new interface driver hub
Dec 10 18:51:13 localhost kernel: usbcore: registered new device driver usb
Dec 10 18:51:13 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 10 18:51:13 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 10 18:51:13 localhost kernel: PTP clock support registered
Dec 10 18:51:13 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 10 18:51:13 localhost kernel: NetLabel: Initializing
Dec 10 18:51:13 localhost kernel: NetLabel:  domain hash size = 128
Dec 10 18:51:13 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 10 18:51:13 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 10 18:51:13 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 10 18:51:13 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 10 18:51:13 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 10 18:51:13 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 10 18:51:13 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 10 18:51:13 localhost kernel: vgaarb: loaded
Dec 10 18:51:13 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 10 18:51:13 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 10 18:51:13 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 10 18:51:13 localhost kernel: pnp: PnP ACPI init
Dec 10 18:51:13 localhost kernel: pnp 00:03: [dma 2]
Dec 10 18:51:13 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 10 18:51:13 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 10 18:51:13 localhost kernel: NET: Registered PF_INET protocol family
Dec 10 18:51:13 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 10 18:51:13 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 10 18:51:13 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 10 18:51:13 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 10 18:51:13 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 10 18:51:13 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 10 18:51:13 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 10 18:51:13 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 10 18:51:13 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 10 18:51:13 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 10 18:51:13 localhost kernel: NET: Registered PF_XDP protocol family
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 10 18:51:13 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 10 18:51:13 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 10 18:51:13 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 10 18:51:13 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72449 usecs
Dec 10 18:51:13 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 10 18:51:13 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 10 18:51:13 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 10 18:51:13 localhost kernel: ACPI: bus type thunderbolt registered
Dec 10 18:51:13 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 10 18:51:13 localhost kernel: Initialise system trusted keyrings
Dec 10 18:51:13 localhost kernel: Key type blacklist registered
Dec 10 18:51:13 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 10 18:51:13 localhost kernel: zbud: loaded
Dec 10 18:51:13 localhost kernel: integrity: Platform Keyring initialized
Dec 10 18:51:13 localhost kernel: integrity: Machine keyring initialized
Dec 10 18:51:13 localhost kernel: Freeing initrd memory: 87820K
Dec 10 18:51:13 localhost kernel: NET: Registered PF_ALG protocol family
Dec 10 18:51:13 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 10 18:51:13 localhost kernel: Key type asymmetric registered
Dec 10 18:51:13 localhost kernel: Asymmetric key parser 'x509' registered
Dec 10 18:51:13 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 10 18:51:13 localhost kernel: io scheduler mq-deadline registered
Dec 10 18:51:13 localhost kernel: io scheduler kyber registered
Dec 10 18:51:13 localhost kernel: io scheduler bfq registered
Dec 10 18:51:13 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 10 18:51:13 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 10 18:51:13 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 10 18:51:13 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 10 18:51:13 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 10 18:51:13 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 10 18:51:13 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 10 18:51:13 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 10 18:51:13 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 10 18:51:13 localhost kernel: Non-volatile memory driver v1.3
Dec 10 18:51:13 localhost kernel: rdac: device handler registered
Dec 10 18:51:13 localhost kernel: hp_sw: device handler registered
Dec 10 18:51:13 localhost kernel: emc: device handler registered
Dec 10 18:51:13 localhost kernel: alua: device handler registered
Dec 10 18:51:13 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 10 18:51:13 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 10 18:51:13 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 10 18:51:13 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 10 18:51:13 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 10 18:51:13 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 10 18:51:13 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 10 18:51:13 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 10 18:51:13 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 10 18:51:13 localhost kernel: hub 1-0:1.0: USB hub found
Dec 10 18:51:13 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 10 18:51:13 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 10 18:51:13 localhost kernel: usbserial: USB Serial support registered for generic
Dec 10 18:51:13 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 10 18:51:13 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 10 18:51:13 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 10 18:51:13 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 10 18:51:13 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 10 18:51:13 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 10 18:51:13 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 10 18:51:13 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-10T18:51:12 UTC (1765392672)
Dec 10 18:51:13 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 10 18:51:13 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 10 18:51:13 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 10 18:51:13 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 10 18:51:13 localhost kernel: usbcore: registered new interface driver usbhid
Dec 10 18:51:13 localhost kernel: usbhid: USB HID core driver
Dec 10 18:51:13 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 10 18:51:13 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 10 18:51:13 localhost kernel: Initializing XFRM netlink socket
Dec 10 18:51:13 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 10 18:51:13 localhost kernel: Segment Routing with IPv6
Dec 10 18:51:13 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 10 18:51:13 localhost kernel: mpls_gso: MPLS GSO support
Dec 10 18:51:13 localhost kernel: IPI shorthand broadcast: enabled
Dec 10 18:51:13 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 10 18:51:13 localhost kernel: AES CTR mode by8 optimization enabled
Dec 10 18:51:13 localhost kernel: sched_clock: Marking stable (1648015494, 158308003)->(1949492732, -143169235)
Dec 10 18:51:13 localhost kernel: registered taskstats version 1
Dec 10 18:51:13 localhost kernel: Loading compiled-in X.509 certificates
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 10 18:51:13 localhost kernel: Demotion targets for Node 0: null
Dec 10 18:51:13 localhost kernel: page_owner is disabled
Dec 10 18:51:13 localhost kernel: Key type .fscrypt registered
Dec 10 18:51:13 localhost kernel: Key type fscrypt-provisioning registered
Dec 10 18:51:13 localhost kernel: Key type big_key registered
Dec 10 18:51:13 localhost kernel: Key type encrypted registered
Dec 10 18:51:13 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 10 18:51:13 localhost kernel: Loading compiled-in module X.509 certificates
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 10 18:51:13 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 10 18:51:13 localhost kernel: ima: No architecture policies found
Dec 10 18:51:13 localhost kernel: evm: Initialising EVM extended attributes:
Dec 10 18:51:13 localhost kernel: evm: security.selinux
Dec 10 18:51:13 localhost kernel: evm: security.SMACK64 (disabled)
Dec 10 18:51:13 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 10 18:51:13 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 10 18:51:13 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 10 18:51:13 localhost kernel: evm: security.apparmor (disabled)
Dec 10 18:51:13 localhost kernel: evm: security.ima
Dec 10 18:51:13 localhost kernel: evm: security.capability
Dec 10 18:51:13 localhost kernel: evm: HMAC attrs: 0x1
Dec 10 18:51:13 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 10 18:51:13 localhost kernel: Running certificate verification RSA selftest
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 10 18:51:13 localhost kernel: Running certificate verification ECDSA selftest
Dec 10 18:51:13 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 10 18:51:13 localhost kernel: clk: Disabling unused clocks
Dec 10 18:51:13 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 10 18:51:13 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 10 18:51:13 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 10 18:51:13 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 10 18:51:13 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 10 18:51:13 localhost kernel: Run /init as init process
Dec 10 18:51:13 localhost kernel:   with arguments:
Dec 10 18:51:13 localhost kernel:     /init
Dec 10 18:51:13 localhost kernel:   with environment:
Dec 10 18:51:13 localhost kernel:     HOME=/
Dec 10 18:51:13 localhost kernel:     TERM=linux
Dec 10 18:51:13 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 10 18:51:13 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 10 18:51:13 localhost systemd[1]: Detected virtualization kvm.
Dec 10 18:51:13 localhost systemd[1]: Detected architecture x86-64.
Dec 10 18:51:13 localhost systemd[1]: Running in initrd.
Dec 10 18:51:13 localhost systemd[1]: No hostname configured, using default hostname.
Dec 10 18:51:13 localhost systemd[1]: Hostname set to <localhost>.
Dec 10 18:51:13 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 10 18:51:13 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 10 18:51:13 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 10 18:51:13 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 10 18:51:13 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 10 18:51:13 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 10 18:51:13 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 10 18:51:13 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 10 18:51:13 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 10 18:51:13 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 10 18:51:13 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 10 18:51:13 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 10 18:51:13 localhost systemd[1]: Reached target Local File Systems.
Dec 10 18:51:13 localhost systemd[1]: Reached target Path Units.
Dec 10 18:51:13 localhost systemd[1]: Reached target Slice Units.
Dec 10 18:51:13 localhost systemd[1]: Reached target Swaps.
Dec 10 18:51:13 localhost systemd[1]: Reached target Timer Units.
Dec 10 18:51:13 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 10 18:51:13 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 10 18:51:13 localhost systemd[1]: Listening on Journal Socket.
Dec 10 18:51:13 localhost systemd[1]: Listening on udev Control Socket.
Dec 10 18:51:13 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 10 18:51:13 localhost systemd[1]: Reached target Socket Units.
Dec 10 18:51:13 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 10 18:51:13 localhost systemd[1]: Starting Journal Service...
Dec 10 18:51:13 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 10 18:51:13 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 10 18:51:13 localhost systemd[1]: Starting Create System Users...
Dec 10 18:51:13 localhost systemd[1]: Starting Setup Virtual Console...
Dec 10 18:51:13 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 10 18:51:13 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 10 18:51:13 localhost systemd[1]: Finished Create System Users.
Dec 10 18:51:13 localhost systemd-journald[306]: Journal started
Dec 10 18:51:13 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/be15d2aa13e44b81819a074ddfd2ac46) is 8.0M, max 153.6M, 145.6M free.
Dec 10 18:51:13 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Dec 10 18:51:13 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Dec 10 18:51:13 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 10 18:51:13 localhost systemd[1]: Started Journal Service.
Dec 10 18:51:13 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 10 18:51:13 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 10 18:51:13 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 10 18:51:13 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 10 18:51:13 localhost systemd[1]: Finished Setup Virtual Console.
Dec 10 18:51:13 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 10 18:51:13 localhost systemd[1]: Starting dracut cmdline hook...
Dec 10 18:51:13 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Dec 10 18:51:13 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 10 18:51:13 localhost systemd[1]: Finished dracut cmdline hook.
Dec 10 18:51:13 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 10 18:51:13 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 10 18:51:13 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 10 18:51:13 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 10 18:51:13 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 10 18:51:13 localhost kernel: RPC: Registered udp transport module.
Dec 10 18:51:13 localhost kernel: RPC: Registered tcp transport module.
Dec 10 18:51:13 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 10 18:51:13 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 10 18:51:13 localhost rpc.statd[442]: Version 2.5.4 starting
Dec 10 18:51:13 localhost rpc.statd[442]: Initializing NSM state
Dec 10 18:51:13 localhost rpc.idmapd[447]: Setting log level to 0
Dec 10 18:51:13 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 10 18:51:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 10 18:51:13 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec 10 18:51:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 10 18:51:13 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 10 18:51:13 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 10 18:51:13 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 10 18:51:13 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 10 18:51:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 10 18:51:13 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 10 18:51:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 10 18:51:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 10 18:51:13 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 10 18:51:13 localhost systemd[1]: Reached target Network.
Dec 10 18:51:13 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 10 18:51:13 localhost systemd[1]: Starting dracut initqueue hook...
Dec 10 18:51:13 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 10 18:51:14 localhost kernel: libata version 3.00 loaded.
Dec 10 18:51:14 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 10 18:51:14 localhost kernel: scsi host0: ata_piix
Dec 10 18:51:14 localhost kernel: scsi host1: ata_piix
Dec 10 18:51:14 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 10 18:51:14 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 10 18:51:14 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 10 18:51:14 localhost kernel:  vda: vda1
Dec 10 18:51:14 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 10 18:51:14 localhost kernel: ata1: found unknown device (class 0)
Dec 10 18:51:14 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 10 18:51:14 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 10 18:51:14 localhost systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 18:51:14 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 10 18:51:14 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 10 18:51:14 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 10 18:51:14 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 10 18:51:14 localhost systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 10 18:51:14 localhost systemd[1]: Reached target Initrd Root Device.
Dec 10 18:51:14 localhost systemd[1]: Reached target System Initialization.
Dec 10 18:51:14 localhost systemd[1]: Reached target Basic System.
Dec 10 18:51:14 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 10 18:51:14 localhost systemd[1]: Finished dracut initqueue hook.
Dec 10 18:51:14 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 10 18:51:14 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 10 18:51:14 localhost systemd[1]: Reached target Remote File Systems.
Dec 10 18:51:14 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 10 18:51:14 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 10 18:51:14 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 10 18:51:14 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec 10 18:51:14 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 10 18:51:14 localhost systemd[1]: Mounting /sysroot...
Dec 10 18:51:14 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 10 18:51:14 localhost kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 10 18:51:14 localhost kernel: XFS (vda1): Ending clean mount
Dec 10 18:51:14 localhost systemd[1]: Mounted /sysroot.
Dec 10 18:51:14 localhost systemd[1]: Reached target Initrd Root File System.
Dec 10 18:51:15 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 10 18:51:15 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 10 18:51:15 localhost systemd[1]: Reached target Initrd File Systems.
Dec 10 18:51:15 localhost systemd[1]: Reached target Initrd Default Target.
Dec 10 18:51:15 localhost systemd[1]: Starting dracut mount hook...
Dec 10 18:51:15 localhost systemd[1]: Finished dracut mount hook.
Dec 10 18:51:15 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 10 18:51:15 localhost rpc.idmapd[447]: exiting on signal 15
Dec 10 18:51:15 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 10 18:51:15 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 10 18:51:15 localhost systemd[1]: Stopped target Network.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Timer Units.
Dec 10 18:51:15 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 10 18:51:15 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Basic System.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Path Units.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Remote File Systems.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Slice Units.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Socket Units.
Dec 10 18:51:15 localhost systemd[1]: Stopped target System Initialization.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Local File Systems.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Swaps.
Dec 10 18:51:15 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped dracut mount hook.
Dec 10 18:51:15 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 10 18:51:15 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 10 18:51:15 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 10 18:51:15 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 10 18:51:15 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 10 18:51:15 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 10 18:51:15 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 10 18:51:15 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 10 18:51:15 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 10 18:51:15 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 10 18:51:15 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 10 18:51:15 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 10 18:51:15 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Closed udev Control Socket.
Dec 10 18:51:15 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Closed udev Kernel Socket.
Dec 10 18:51:15 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 10 18:51:15 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 10 18:51:15 localhost systemd[1]: Starting Cleanup udev Database...
Dec 10 18:51:15 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 10 18:51:15 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 10 18:51:15 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Stopped Create System Users.
Dec 10 18:51:15 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 10 18:51:15 localhost systemd[1]: Finished Cleanup udev Database.
Dec 10 18:51:15 localhost systemd[1]: Reached target Switch Root.
Dec 10 18:51:15 localhost systemd[1]: Starting Switch Root...
Dec 10 18:51:15 localhost systemd[1]: Switching root.
Dec 10 18:51:15 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Dec 10 18:51:15 localhost systemd-journald[306]: Journal stopped
Dec 10 18:51:16 localhost kernel: audit: type=1404 audit(1765392675.399:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 10 18:51:16 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 18:51:16 localhost kernel: SELinux:  policy capability open_perms=1
Dec 10 18:51:16 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 18:51:16 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 10 18:51:16 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 18:51:16 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 18:51:16 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 18:51:16 localhost kernel: audit: type=1403 audit(1765392675.537:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 10 18:51:16 localhost systemd[1]: Successfully loaded SELinux policy in 141.305ms.
Dec 10 18:51:16 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.771ms.
Dec 10 18:51:16 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 10 18:51:16 localhost systemd[1]: Detected virtualization kvm.
Dec 10 18:51:16 localhost systemd[1]: Detected architecture x86-64.
Dec 10 18:51:16 localhost systemd-rc-local-generator[636]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 18:51:16 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Stopped Switch Root.
Dec 10 18:51:16 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 10 18:51:16 localhost systemd[1]: Created slice Slice /system/getty.
Dec 10 18:51:16 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 10 18:51:16 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 10 18:51:16 localhost systemd[1]: Created slice User and Session Slice.
Dec 10 18:51:16 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 10 18:51:16 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 10 18:51:16 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 10 18:51:16 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 10 18:51:16 localhost systemd[1]: Stopped target Switch Root.
Dec 10 18:51:16 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 10 18:51:16 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 10 18:51:16 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 10 18:51:16 localhost systemd[1]: Reached target Path Units.
Dec 10 18:51:16 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 10 18:51:16 localhost systemd[1]: Reached target Slice Units.
Dec 10 18:51:16 localhost systemd[1]: Reached target Swaps.
Dec 10 18:51:16 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 10 18:51:16 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 10 18:51:16 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 10 18:51:16 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 10 18:51:16 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 10 18:51:16 localhost systemd[1]: Listening on udev Control Socket.
Dec 10 18:51:16 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 10 18:51:16 localhost systemd[1]: Mounting Huge Pages File System...
Dec 10 18:51:16 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 10 18:51:16 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 10 18:51:16 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 10 18:51:16 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 10 18:51:16 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 10 18:51:16 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 10 18:51:16 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 10 18:51:16 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 10 18:51:16 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 10 18:51:16 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 10 18:51:16 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 10 18:51:16 localhost systemd[1]: Stopped Journal Service.
Dec 10 18:51:16 localhost kernel: fuse: init (API version 7.37)
Dec 10 18:51:16 localhost systemd[1]: Starting Journal Service...
Dec 10 18:51:16 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 10 18:51:16 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 10 18:51:16 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 10 18:51:16 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 10 18:51:16 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 10 18:51:16 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 10 18:51:16 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 10 18:51:16 localhost systemd-journald[677]: Journal started
Dec 10 18:51:16 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 10 18:51:15 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 10 18:51:15 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Started Journal Service.
Dec 10 18:51:16 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 10 18:51:16 localhost systemd[1]: Mounted Huge Pages File System.
Dec 10 18:51:16 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 10 18:51:16 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 10 18:51:16 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 10 18:51:16 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 10 18:51:16 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 10 18:51:16 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 10 18:51:16 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 10 18:51:16 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 10 18:51:16 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 10 18:51:16 localhost kernel: ACPI: bus type drm_connector registered
Dec 10 18:51:16 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 10 18:51:16 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 10 18:51:16 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 10 18:51:16 localhost systemd[1]: Mounting FUSE Control File System...
Dec 10 18:51:16 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 10 18:51:16 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 10 18:51:16 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 10 18:51:16 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 10 18:51:16 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 10 18:51:16 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 10 18:51:16 localhost systemd-journald[677]: Received client request to flush runtime journal.
Dec 10 18:51:16 localhost systemd[1]: Starting Create System Users...
Dec 10 18:51:16 localhost systemd[1]: Mounted FUSE Control File System.
Dec 10 18:51:16 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 10 18:51:16 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 10 18:51:16 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 10 18:51:16 localhost systemd[1]: Finished Create System Users.
Dec 10 18:51:16 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 10 18:51:16 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 10 18:51:16 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 10 18:51:16 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 10 18:51:16 localhost systemd[1]: Reached target Local File Systems.
Dec 10 18:51:16 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 10 18:51:16 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 10 18:51:16 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 10 18:51:16 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 10 18:51:16 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 10 18:51:16 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 10 18:51:16 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 10 18:51:16 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Dec 10 18:51:16 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 10 18:51:16 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 10 18:51:16 localhost systemd[1]: Starting Security Auditing Service...
Dec 10 18:51:16 localhost systemd[1]: Starting RPC Bind...
Dec 10 18:51:16 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 10 18:51:16 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 10 18:51:16 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 10 18:51:16 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 10 18:51:16 localhost systemd[1]: Started RPC Bind.
Dec 10 18:51:16 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 10 18:51:16 localhost augenrules[705]: /sbin/augenrules: No change
Dec 10 18:51:16 localhost augenrules[721]: No rules
Dec 10 18:51:16 localhost augenrules[721]: enabled 1
Dec 10 18:51:16 localhost augenrules[721]: failure 1
Dec 10 18:51:16 localhost augenrules[721]: pid 700
Dec 10 18:51:16 localhost augenrules[721]: rate_limit 0
Dec 10 18:51:16 localhost augenrules[721]: backlog_limit 8192
Dec 10 18:51:16 localhost augenrules[721]: lost 0
Dec 10 18:51:16 localhost augenrules[721]: backlog 2
Dec 10 18:51:16 localhost augenrules[721]: backlog_wait_time 60000
Dec 10 18:51:16 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 10 18:51:16 localhost augenrules[721]: enabled 1
Dec 10 18:51:16 localhost augenrules[721]: failure 1
Dec 10 18:51:16 localhost augenrules[721]: pid 700
Dec 10 18:51:16 localhost augenrules[721]: rate_limit 0
Dec 10 18:51:16 localhost augenrules[721]: backlog_limit 8192
Dec 10 18:51:16 localhost augenrules[721]: lost 0
Dec 10 18:51:16 localhost augenrules[721]: backlog 1
Dec 10 18:51:16 localhost augenrules[721]: backlog_wait_time 60000
Dec 10 18:51:16 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 10 18:51:16 localhost augenrules[721]: enabled 1
Dec 10 18:51:16 localhost augenrules[721]: failure 1
Dec 10 18:51:16 localhost augenrules[721]: pid 700
Dec 10 18:51:16 localhost augenrules[721]: rate_limit 0
Dec 10 18:51:16 localhost augenrules[721]: backlog_limit 8192
Dec 10 18:51:16 localhost augenrules[721]: lost 0
Dec 10 18:51:16 localhost augenrules[721]: backlog 0
Dec 10 18:51:16 localhost augenrules[721]: backlog_wait_time 60000
Dec 10 18:51:16 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 10 18:51:16 localhost systemd[1]: Started Security Auditing Service.
Dec 10 18:51:16 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 10 18:51:16 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 10 18:51:16 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 10 18:51:16 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 10 18:51:16 localhost systemd[1]: Starting Update is Completed...
Dec 10 18:51:16 localhost systemd[1]: Finished Update is Completed.
Dec 10 18:51:16 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec 10 18:51:16 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 10 18:51:16 localhost systemd[1]: Reached target System Initialization.
Dec 10 18:51:16 localhost systemd[1]: Started dnf makecache --timer.
Dec 10 18:51:16 localhost systemd[1]: Started Daily rotation of log files.
Dec 10 18:51:16 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 10 18:51:16 localhost systemd[1]: Reached target Timer Units.
Dec 10 18:51:16 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 10 18:51:16 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 10 18:51:16 localhost systemd[1]: Reached target Socket Units.
Dec 10 18:51:16 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 10 18:51:16 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 10 18:51:16 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 10 18:51:16 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 10 18:51:16 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 10 18:51:16 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 10 18:51:16 localhost systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 18:51:16 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 10 18:51:16 localhost systemd[1]: Reached target Basic System.
Dec 10 18:51:16 localhost dbus-broker-lau[758]: Ready
Dec 10 18:51:16 localhost systemd[1]: Starting NTP client/server...
Dec 10 18:51:16 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 10 18:51:16 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 10 18:51:16 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 10 18:51:16 localhost systemd[1]: Started irqbalance daemon.
Dec 10 18:51:16 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 10 18:51:17 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 10 18:51:17 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 10 18:51:17 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 10 18:51:17 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 10 18:51:17 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 10 18:51:17 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 10 18:51:17 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 10 18:51:17 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 10 18:51:17 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 10 18:51:17 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 10 18:51:17 localhost chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 10 18:51:17 localhost chronyd[792]: Loaded 0 symmetric keys
Dec 10 18:51:17 localhost chronyd[792]: Using right/UTC timezone to obtain leap second data
Dec 10 18:51:17 localhost chronyd[792]: Loaded seccomp filter (level 2)
Dec 10 18:51:17 localhost systemd[1]: Starting User Login Management...
Dec 10 18:51:17 localhost systemd[1]: Started NTP client/server.
Dec 10 18:51:17 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 10 18:51:17 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 10 18:51:17 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 10 18:51:17 localhost systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 10 18:51:17 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 10 18:51:17 localhost systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 10 18:51:17 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 10 18:51:17 localhost kernel: kvm_amd: TSC scaling supported
Dec 10 18:51:17 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 10 18:51:17 localhost kernel: kvm_amd: Nested Paging enabled
Dec 10 18:51:17 localhost kernel: kvm_amd: LBR virtualization supported
Dec 10 18:51:17 localhost kernel: Console: switching to colour dummy device 80x25
Dec 10 18:51:17 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 10 18:51:17 localhost kernel: [drm] features: -context_init
Dec 10 18:51:17 localhost kernel: [drm] number of scanouts: 1
Dec 10 18:51:17 localhost kernel: [drm] number of cap sets: 0
Dec 10 18:51:17 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 10 18:51:17 localhost systemd-logind[789]: New seat seat0.
Dec 10 18:51:17 localhost systemd[1]: Started User Login Management.
Dec 10 18:51:17 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 10 18:51:17 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 10 18:51:17 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 10 18:51:17 localhost iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Dec 10 18:51:17 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 10 18:51:17 localhost cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 10 Dec 2025 18:51:17 +0000. Up 6.64 seconds.
Dec 10 18:51:17 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 10 18:51:17 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 10 18:51:17 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp330fey15.mount: Deactivated successfully.
Dec 10 18:51:17 localhost systemd[1]: Starting Hostname Service...
Dec 10 18:51:17 localhost systemd[1]: Started Hostname Service.
Dec 10 18:51:17 np0005554310.novalocal systemd-hostnamed[852]: Hostname set to <np0005554310.novalocal> (static)
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Reached target Preparation for Network.
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Starting Network Manager...
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.0851] NetworkManager (version 1.54.2-1.el9) is starting... (boot:94b3788e-ef1c-48b5-bcf4-2732c1663990)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.0857] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.0937] manager[0x555dddd40000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.0979] hostname: hostname: using hostnamed
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.0979] hostname: static hostname changed from (none) to "np0005554310.novalocal"
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.0983] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1084] manager[0x555dddd40000]: rfkill: Wi-Fi hardware radio set enabled
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1085] manager[0x555dddd40000]: rfkill: WWAN hardware radio set enabled
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1128] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1129] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1129] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1129] manager: Networking is enabled by state file
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1131] settings: Loaded settings plugin: keyfile (internal)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1140] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1158] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1169] dhcp: init: Using DHCP client 'internal'
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1171] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1184] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1192] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1199] device (lo): Activation: starting connection 'lo' (f2373871-aaf0-4c91-b3c1-62ecfbed22d7)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1209] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1212] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1241] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1246] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1249] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1250] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1252] device (eth0): carrier: link connected
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1256] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1262] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1268] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1272] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1273] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1274] manager: NetworkManager state is now CONNECTING
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1276] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1286] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1289] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1325] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1333] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1357] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Started Network Manager.
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Reached target Network.
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1709] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1711] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1717] device (lo): Activation: successful, device activated.
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1726] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1727] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1729] manager: NetworkManager state is now CONNECTED_SITE
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1731] device (eth0): Activation: successful, device activated.
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1735] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 10 18:51:18 np0005554310.novalocal NetworkManager[856]: <info>  [1765392678.1737] manager: startup complete
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Reached target NFS client services.
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Reached target Remote File Systems.
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 10 18:51:18 np0005554310.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 10 Dec 2025 18:51:18 +0000. Up 7.58 seconds.
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |  eth0  | True |        38.102.83.158        | 255.255.255.0 | global | fa:16:3e:16:03:86 |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |  eth0  | True | fe80::f816:3eff:fe16:386/64 |       .       |  link  | fa:16:3e:16:03:86 |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 10 18:51:18 np0005554310.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 10 18:51:19 np0005554310.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Dec 10 18:51:19 np0005554310.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 10 18:51:19 np0005554310.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Dec 10 18:51:19 np0005554310.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Dec 10 18:51:19 np0005554310.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Dec 10 18:51:19 np0005554310.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Generating public/private rsa key pair.
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: The key fingerprint is:
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: SHA256:Bc/ctMGMaY+ZieYN8CKPuNujgj7OyfVR5AGugtRMQYY root@np0005554310.novalocal
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: The key's randomart image is:
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: +---[RSA 3072]----+
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |  o=..  .  =o    |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: | E= . o  =+ooo   |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: | . o . = o=*o    |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |o   o + *.= .    |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |.. o + *So       |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |  o . o . .      |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |.  o .           |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |=.+.o .          |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |oB+o.o           |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: +----[SHA256]-----+
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Generating public/private ecdsa key pair.
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: The key fingerprint is:
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: SHA256:uh2jJVy1YYNE0YlMsNVhCeq58QxX1215BxffevagvCA root@np0005554310.novalocal
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: The key's randomart image is:
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: +---[ECDSA 256]---+
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |      .=B*o+  ..o|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |       =oo=  . ++|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |      o . * . ..B|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |     . . + =   oo|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |      = S .   o o|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |     . X   . . +.|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |      = E . o   .|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |       * + . .   |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |      o .   .    |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: +----[SHA256]-----+
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Generating public/private ed25519 key pair.
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: The key fingerprint is:
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: SHA256:8NlEyaSo6dEfUWd1QJjNssctWtvwzCq9FYDnbkIgvkM root@np0005554310.novalocal
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: The key's randomart image is:
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: +--[ED25519 256]--+
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |         o+.oB+..|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |       . +oo= o. |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |      o.o... * . |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |     +.o.=. + B .|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |    + .ES .. = X |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |   . ...... o . *|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |    .  o.  . + ..|
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |        .   + o. |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: |             ... |
Dec 10 18:51:19 np0005554310.novalocal cloud-init[919]: +----[SHA256]-----+
Dec 10 18:51:19 np0005554310.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 10 18:51:19 np0005554310.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 10 18:51:19 np0005554310.novalocal systemd[1]: Reached target Network is Online.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting System Logging Service...
Dec 10 18:51:20 np0005554310.novalocal sm-notify[1002]: Version 2.5.4 starting
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting Permit User Sessions...
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Finished Permit User Sessions.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Started Command Scheduler.
Dec 10 18:51:20 np0005554310.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Dec 10 18:51:20 np0005554310.novalocal sshd[1004]: Server listening on :: port 22.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Started Getty on tty1.
Dec 10 18:51:20 np0005554310.novalocal crond[1007]: (CRON) STARTUP (1.5.7)
Dec 10 18:51:20 np0005554310.novalocal crond[1007]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 10 18:51:20 np0005554310.novalocal crond[1007]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 4% if used.)
Dec 10 18:51:20 np0005554310.novalocal crond[1007]: (CRON) INFO (running with inotify support)
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Reached target Login Prompts.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 10 18:51:20 np0005554310.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Dec 10 18:51:20 np0005554310.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Started System Logging Service.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Reached target Multi-User System.
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1010]: Connection reset by 38.102.83.114 port 41344 [preauth]
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1020]: Unable to negotiate with 38.102.83.114 port 41354: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 10 18:51:20 np0005554310.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1036]: Unable to negotiate with 38.102.83.114 port 41360: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1045]: Unable to negotiate with 38.102.83.114 port 41364: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1053]: Connection reset by 38.102.83.114 port 41368 [preauth]
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1074]: Unable to negotiate with 38.102.83.114 port 41376: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1078]: Unable to negotiate with 38.102.83.114 port 41388: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 10 18:51:20 np0005554310.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Dec 10 18:51:20 np0005554310.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1027]: Connection closed by 38.102.83.114 port 41356 [preauth]
Dec 10 18:51:20 np0005554310.novalocal sshd-session[1067]: Connection closed by 38.102.83.114 port 41372 [preauth]
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1194]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 10 Dec 2025 18:51:20 +0000. Up 9.47 seconds.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 10 18:51:20 np0005554310.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 10 18:51:20 np0005554310.novalocal dracut[1281]: dracut-057-102.git20250818.el9
Dec 10 18:51:20 np0005554310.novalocal dracut[1283]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1337]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 10 Dec 2025 18:51:20 +0000. Up 9.91 seconds.
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1353]: #############################################################
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1354]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1358]: 256 SHA256:uh2jJVy1YYNE0YlMsNVhCeq58QxX1215BxffevagvCA root@np0005554310.novalocal (ECDSA)
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1361]: 256 SHA256:8NlEyaSo6dEfUWd1QJjNssctWtvwzCq9FYDnbkIgvkM root@np0005554310.novalocal (ED25519)
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1366]: 3072 SHA256:Bc/ctMGMaY+ZieYN8CKPuNujgj7OyfVR5AGugtRMQYY root@np0005554310.novalocal (RSA)
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1367]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 10 18:51:20 np0005554310.novalocal cloud-init[1368]: #############################################################
Dec 10 18:51:21 np0005554310.novalocal cloud-init[1337]: Cloud-init v. 24.4-7.el9 finished at Wed, 10 Dec 2025 18:51:21 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.11 seconds
Dec 10 18:51:21 np0005554310.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 10 18:51:21 np0005554310.novalocal systemd[1]: Reached target Cloud-init target.
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 10 18:51:21 np0005554310.novalocal dracut[1283]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: memstrack is not available
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: memstrack is not available
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: *** Including module: systemd ***
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: *** Including module: fips ***
Dec 10 18:51:22 np0005554310.novalocal dracut[1283]: *** Including module: systemd-initrd ***
Dec 10 18:51:23 np0005554310.novalocal dracut[1283]: *** Including module: i18n ***
Dec 10 18:51:23 np0005554310.novalocal dracut[1283]: *** Including module: drm ***
Dec 10 18:51:23 np0005554310.novalocal dracut[1283]: *** Including module: prefixdevname ***
Dec 10 18:51:23 np0005554310.novalocal dracut[1283]: *** Including module: kernel-modules ***
Dec 10 18:51:23 np0005554310.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: *** Including module: kernel-modules-extra ***
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: *** Including module: qemu ***
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: *** Including module: fstab-sys ***
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: *** Including module: rootfs-block ***
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: *** Including module: terminfo ***
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: *** Including module: udev-rules ***
Dec 10 18:51:24 np0005554310.novalocal chronyd[792]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Dec 10 18:51:24 np0005554310.novalocal chronyd[792]: System clock TAI offset set to 37 seconds
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: Skipping udev rule: 91-permissions.rules
Dec 10 18:51:24 np0005554310.novalocal dracut[1283]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]: *** Including module: virtiofs ***
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]: *** Including module: dracut-systemd ***
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]: *** Including module: usrmount ***
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]: *** Including module: base ***
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]: *** Including module: fs-lib ***
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]: *** Including module: kdumpbase ***
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]:   microcode_ctl module: mangling fw_dir
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel" is ignored
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 10 18:51:25 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]: *** Including module: openssl ***
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]: *** Including module: shutdown ***
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]: *** Including module: squash ***
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]: *** Including modules done ***
Dec 10 18:51:26 np0005554310.novalocal dracut[1283]: *** Installing kernel module dependencies ***
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: IRQ 25 affinity is now unmanaged
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: IRQ 31 affinity is now unmanaged
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: IRQ 28 affinity is now unmanaged
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: IRQ 32 affinity is now unmanaged
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: IRQ 30 affinity is now unmanaged
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 10 18:51:26 np0005554310.novalocal irqbalance[780]: IRQ 29 affinity is now unmanaged
Dec 10 18:51:27 np0005554310.novalocal dracut[1283]: *** Installing kernel module dependencies done ***
Dec 10 18:51:27 np0005554310.novalocal dracut[1283]: *** Resolving executable dependencies ***
Dec 10 18:51:28 np0005554310.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 10 18:51:29 np0005554310.novalocal dracut[1283]: *** Resolving executable dependencies done ***
Dec 10 18:51:29 np0005554310.novalocal dracut[1283]: *** Generating early-microcode cpio image ***
Dec 10 18:51:29 np0005554310.novalocal dracut[1283]: *** Store current command line parameters ***
Dec 10 18:51:29 np0005554310.novalocal dracut[1283]: Stored kernel commandline:
Dec 10 18:51:29 np0005554310.novalocal dracut[1283]: No dracut internal kernel commandline stored in the initramfs
Dec 10 18:51:29 np0005554310.novalocal dracut[1283]: *** Install squash loader ***
Dec 10 18:51:30 np0005554310.novalocal dracut[1283]: *** Squashing the files inside the initramfs ***
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: *** Squashing the files inside the initramfs done ***
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: *** Hardlinking files ***
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: Mode:           real
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: Files:          50
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: Linked:         0 files
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: Compared:       0 xattrs
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: Compared:       0 files
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: Saved:          0 B
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: Duration:       0.000804 seconds
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: *** Hardlinking files done ***
Dec 10 18:51:31 np0005554310.novalocal dracut[1283]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 10 18:51:32 np0005554310.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Dec 10 18:51:32 np0005554310.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Dec 10 18:51:32 np0005554310.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 10 18:51:32 np0005554310.novalocal systemd[1]: Startup finished in 2.004s (kernel) + 2.484s (initrd) + 16.968s (userspace) = 21.457s.
Dec 10 18:51:37 np0005554310.novalocal sshd-session[4292]: Accepted publickey for zuul from 38.102.83.114 port 46918 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 10 18:51:37 np0005554310.novalocal systemd-logind[789]: New session 1 of user zuul.
Dec 10 18:51:37 np0005554310.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 10 18:51:37 np0005554310.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 10 18:51:37 np0005554310.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 10 18:51:37 np0005554310.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 10 18:51:37 np0005554310.novalocal systemd[4296]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Queued start job for default target Main User Target.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Created slice User Application Slice.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Started Daily Cleanup of User's Temporary Directories.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Reached target Paths.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Reached target Timers.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Starting D-Bus User Message Bus Socket...
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Starting Create User's Volatile Files and Directories...
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Finished Create User's Volatile Files and Directories.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Listening on D-Bus User Message Bus Socket.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Reached target Sockets.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Reached target Basic System.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Reached target Main User Target.
Dec 10 18:51:38 np0005554310.novalocal systemd[4296]: Startup finished in 123ms.
Dec 10 18:51:38 np0005554310.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 10 18:51:38 np0005554310.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 10 18:51:38 np0005554310.novalocal sshd-session[4292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 18:51:38 np0005554310.novalocal python3[4379]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 18:51:40 np0005554310.novalocal python3[4407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 18:51:46 np0005554310.novalocal python3[4465]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 18:51:47 np0005554310.novalocal python3[4505]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 10 18:51:48 np0005554310.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 10 18:51:49 np0005554310.novalocal python3[4533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcKkIDN5MfaZNsVpjD996hSY0DEEzT08Yg6aT11cazRpKyAcAQAiJYune2AYvWFcaGeI6TWfwa6LHIFU8nYJ3UhOiEjnn4choaSAbm12WFophJp6Lv1rE5zAjG3U4xPY/gvGq8EwGdgFJc/JIARO4Z2Y16tMDb7pUHGNBqwrbegbmpV79evDTAeqoGfYUc1NU0dDpqVqA0skWH17KNmJJNiPEbKad3Rd2mVUXABLpCBZwJmMxvkg0Ig5EXsrGWAZr23YmOSPOgEOd4sc/sXzxSLJrVyNj3oe6ibyjrGgoEqBV/vE9GndshZcUP5cm/JuGS7jVB6zFK3hrAhepCHQKMaCH+x5cKErJ24F7IL3bW2skau/bMKfJG6EncuYrxu9ZhEpxXvm02LCB8P+H3gI/lm/5SHvi4XvUhJ85yLWlno9Y+Xm31PBsqCDujjengXQ6QHZCdILP0FHMIf1QRnimYIMueRkoNdnrdfH/EBiclIHjfSMAaEfU/OlufoxUbbYs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:51:49 np0005554310.novalocal python3[4557]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:50 np0005554310.novalocal python3[4656]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:51:50 np0005554310.novalocal python3[4727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765392709.8952997-207-256889882943735/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ffae9eeb8ec149e09deeb7bdc3d1d724_id_rsa follow=False checksum=abf4a54628a610c7e08c055b76e2e39ba784312a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:51 np0005554310.novalocal python3[4850]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:51:51 np0005554310.novalocal python3[4921]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765392710.7819011-240-223488073622713/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ffae9eeb8ec149e09deeb7bdc3d1d724_id_rsa.pub follow=False checksum=c49135b1e037bf038213c3aa84598b352f663939 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:52 np0005554310.novalocal python3[4969]: ansible-ping Invoked with data=pong
Dec 10 18:51:53 np0005554310.novalocal python3[4993]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 18:51:55 np0005554310.novalocal python3[5051]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 10 18:51:56 np0005554310.novalocal python3[5083]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:57 np0005554310.novalocal python3[5107]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:57 np0005554310.novalocal python3[5131]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:57 np0005554310.novalocal python3[5155]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:57 np0005554310.novalocal python3[5179]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:58 np0005554310.novalocal python3[5203]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:59 np0005554310.novalocal sudo[5227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anifrdbcrkddfhntljohmdwvkmqwtsmw ; /usr/bin/python3'
Dec 10 18:51:59 np0005554310.novalocal sudo[5227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:51:59 np0005554310.novalocal python3[5229]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:51:59 np0005554310.novalocal sudo[5227]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:00 np0005554310.novalocal sudo[5305]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yusxvisrzzriwcpptfemuujcbstrsvqg ; /usr/bin/python3'
Dec 10 18:52:00 np0005554310.novalocal sudo[5305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:00 np0005554310.novalocal python3[5307]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:52:00 np0005554310.novalocal sudo[5305]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:00 np0005554310.novalocal sudo[5378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ornzldwqkeepotzkbvcmqvcindlurequ ; /usr/bin/python3'
Dec 10 18:52:00 np0005554310.novalocal sudo[5378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:01 np0005554310.novalocal python3[5380]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765392720.0692968-21-227052233853272/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:01 np0005554310.novalocal sudo[5378]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:01 np0005554310.novalocal python3[5428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:01 np0005554310.novalocal python3[5452]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:02 np0005554310.novalocal python3[5476]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:02 np0005554310.novalocal python3[5500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:02 np0005554310.novalocal python3[5524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:03 np0005554310.novalocal python3[5548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:03 np0005554310.novalocal python3[5572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:03 np0005554310.novalocal python3[5596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:03 np0005554310.novalocal python3[5620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:04 np0005554310.novalocal python3[5644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:04 np0005554310.novalocal python3[5668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:04 np0005554310.novalocal python3[5692]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:04 np0005554310.novalocal python3[5716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:05 np0005554310.novalocal python3[5740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:05 np0005554310.novalocal python3[5764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:05 np0005554310.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:06 np0005554310.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:06 np0005554310.novalocal sshd-session[5765]: Received disconnect from 193.46.255.20 port 59134:11:  [preauth]
Dec 10 18:52:06 np0005554310.novalocal sshd-session[5765]: Disconnected from authenticating user root 193.46.255.20 port 59134 [preauth]
Dec 10 18:52:06 np0005554310.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:06 np0005554310.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:06 np0005554310.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:07 np0005554310.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:07 np0005554310.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:07 np0005554310.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:07 np0005554310.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:08 np0005554310.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:08 np0005554310.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 18:52:11 np0005554310.novalocal sudo[6054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovbampmbimngrwrfasoofgivlgurcayw ; /usr/bin/python3'
Dec 10 18:52:11 np0005554310.novalocal sudo[6054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:11 np0005554310.novalocal python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 10 18:52:11 np0005554310.novalocal systemd[1]: Starting Time & Date Service...
Dec 10 18:52:11 np0005554310.novalocal systemd[1]: Started Time & Date Service.
Dec 10 18:52:11 np0005554310.novalocal systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Dec 10 18:52:11 np0005554310.novalocal sudo[6054]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:11 np0005554310.novalocal sudo[6085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzcivolmpbvsxzvfayksnbxnoyqxjzg ; /usr/bin/python3'
Dec 10 18:52:11 np0005554310.novalocal sudo[6085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:11 np0005554310.novalocal python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:11 np0005554310.novalocal sudo[6085]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:12 np0005554310.novalocal python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:52:12 np0005554310.novalocal python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765392732.1612096-153-38109849394746/source _original_basename=tmpro1cs04o follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:13 np0005554310.novalocal python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:52:13 np0005554310.novalocal python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765392733.004919-183-70204742671338/source _original_basename=tmp6ab_f7kw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:14 np0005554310.novalocal sudo[6505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xudmzkdgqpqczndgehdvwhouigcqaiwn ; /usr/bin/python3'
Dec 10 18:52:14 np0005554310.novalocal sudo[6505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:14 np0005554310.novalocal python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:52:14 np0005554310.novalocal sudo[6505]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:14 np0005554310.novalocal sudo[6578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncobvffozhwmojpsoxokelcxsejtazf ; /usr/bin/python3'
Dec 10 18:52:14 np0005554310.novalocal sudo[6578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:14 np0005554310.novalocal python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765392734.2014532-231-245108086718661/source _original_basename=tmpbhq1olen follow=False checksum=5af11a2484d4a32bfd779dd7279c8c1bc46ad659 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:14 np0005554310.novalocal sudo[6578]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:15 np0005554310.novalocal python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 18:52:15 np0005554310.novalocal python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 18:52:16 np0005554310.novalocal sudo[6732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oghuxlbbmrfupdqwpaghqvhzkgjgltgb ; /usr/bin/python3'
Dec 10 18:52:16 np0005554310.novalocal sudo[6732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:16 np0005554310.novalocal python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:52:16 np0005554310.novalocal sudo[6732]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:16 np0005554310.novalocal sudo[6805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfcryfdofilsnzfpaqeqvbnxuryvcxhy ; /usr/bin/python3'
Dec 10 18:52:16 np0005554310.novalocal sudo[6805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:16 np0005554310.novalocal python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765392736.0260944-273-251289522056188/source _original_basename=tmpdel_hkbt follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:16 np0005554310.novalocal sudo[6805]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:17 np0005554310.novalocal sudo[6856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayhgwwbwpcnwgwimuucaqisoxxecader ; /usr/bin/python3'
Dec 10 18:52:17 np0005554310.novalocal sudo[6856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:17 np0005554310.novalocal python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-c2e1-20db-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 18:52:17 np0005554310.novalocal sudo[6856]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:17 np0005554310.novalocal python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-c2e1-20db-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 10 18:52:19 np0005554310.novalocal python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:37 np0005554310.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wajwvqvyfumtnqbwovrrcmnkgsgeikqd ; /usr/bin/python3'
Dec 10 18:52:37 np0005554310.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:52:37 np0005554310.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:52:37 np0005554310.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Dec 10 18:52:41 np0005554310.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 10 18:53:11 np0005554310.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 10 18:53:11 np0005554310.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8407] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 10 18:53:11 np0005554310.novalocal systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8567] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8600] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8604] device (eth1): carrier: link connected
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8607] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8615] policy: auto-activating connection 'Wired connection 1' (a1c87774-911c-3957-85ba-28b52f665aa6)
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8619] device (eth1): Activation: starting connection 'Wired connection 1' (a1c87774-911c-3957-85ba-28b52f665aa6)
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8620] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8623] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8628] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 18:53:11 np0005554310.novalocal NetworkManager[856]: <info>  [1765392791.8632] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 10 18:53:12 np0005554310.novalocal python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-aabf-d44e-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 18:53:19 np0005554310.novalocal sudo[7048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tonmungktlybcbxoakbdwnadfawwfkmb ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 10 18:53:19 np0005554310.novalocal sudo[7048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:53:20 np0005554310.novalocal python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:53:20 np0005554310.novalocal sudo[7048]: pam_unix(sudo:session): session closed for user root
Dec 10 18:53:20 np0005554310.novalocal sudo[7121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrywoumjcqqcyuvofjlvtrtweozsoaez ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 10 18:53:20 np0005554310.novalocal sudo[7121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:53:20 np0005554310.novalocal python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765392799.7032826-102-169018248205125/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d245d87bbd0b9c0880c050e988c986bb19f55f3c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:53:20 np0005554310.novalocal sudo[7121]: pam_unix(sudo:session): session closed for user root
Dec 10 18:53:21 np0005554310.novalocal sudo[7171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojbguyltphtrapklnwacwygrmjeqqylj ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 10 18:53:21 np0005554310.novalocal sudo[7171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:53:21 np0005554310.novalocal python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Stopping Network Manager...
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2728] caught SIGTERM, shutting down normally.
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2737] dhcp4 (eth0): canceled DHCP transaction
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2737] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2737] dhcp4 (eth0): state changed no lease
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2740] manager: NetworkManager state is now CONNECTING
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2866] dhcp4 (eth1): canceled DHCP transaction
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2866] dhcp4 (eth1): state changed no lease
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[856]: <info>  [1765392801.2951] exiting (success)
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Stopped Network Manager.
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Starting Network Manager...
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.3440] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:94b3788e-ef1c-48b5-bcf4-2732c1663990)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.3442] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.3501] manager[0x55b3adf07000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Starting Hostname Service...
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Started Hostname Service.
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4419] hostname: hostname: using hostnamed
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4420] hostname: static hostname changed from (none) to "np0005554310.novalocal"
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4423] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4427] manager[0x55b3adf07000]: rfkill: Wi-Fi hardware radio set enabled
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4427] manager[0x55b3adf07000]: rfkill: WWAN hardware radio set enabled
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4450] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4450] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4451] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4451] manager: Networking is enabled by state file
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4453] settings: Loaded settings plugin: keyfile (internal)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4457] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4479] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4493] dhcp: init: Using DHCP client 'internal'
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4498] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4508] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4519] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4532] device (lo): Activation: starting connection 'lo' (f2373871-aaf0-4c91-b3c1-62ecfbed22d7)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4543] device (eth0): carrier: link connected
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4549] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4554] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4555] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4563] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4572] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4577] device (eth1): carrier: link connected
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4581] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4584] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (a1c87774-911c-3957-85ba-28b52f665aa6) (indicated)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4585] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4589] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4595] device (eth1): Activation: starting connection 'Wired connection 1' (a1c87774-911c-3957-85ba-28b52f665aa6)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4601] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4605] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4607] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Started Network Manager.
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4609] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4610] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4612] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4614] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4617] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4619] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4632] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4636] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4645] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4648] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4665] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4667] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4673] device (lo): Activation: successful, device activated.
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4683] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4692] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 10 18:53:21 np0005554310.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4760] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4777] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4779] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4782] manager: NetworkManager state is now CONNECTED_SITE
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4785] device (eth0): Activation: successful, device activated.
Dec 10 18:53:21 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392801.4792] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 10 18:53:21 np0005554310.novalocal sudo[7171]: pam_unix(sudo:session): session closed for user root
Dec 10 18:53:21 np0005554310.novalocal python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-aabf-d44e-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 18:53:31 np0005554310.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 10 18:53:50 np0005554310.novalocal systemd[4296]: Starting Mark boot as successful...
Dec 10 18:53:50 np0005554310.novalocal systemd[4296]: Finished Mark boot as successful.
Dec 10 18:53:51 np0005554310.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9089] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 10 18:54:06 np0005554310.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 10 18:54:06 np0005554310.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9436] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9443] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9460] device (eth1): Activation: successful, device activated.
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9478] manager: startup complete
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9483] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <warn>  [1765392846.9501] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9522] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9586] dhcp4 (eth1): canceled DHCP transaction
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9587] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9587] dhcp4 (eth1): state changed no lease
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9623] policy: auto-activating connection 'ci-private-network' (c9d23b9c-193f-548f-8fcf-eba6bd4e3cbf)
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9634] device (eth1): Activation: starting connection 'ci-private-network' (c9d23b9c-193f-548f-8fcf-eba6bd4e3cbf)
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9637] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9648] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9665] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9689] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9745] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9752] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 18:54:06 np0005554310.novalocal NetworkManager[7185]: <info>  [1765392846.9768] device (eth1): Activation: successful, device activated.
Dec 10 18:54:17 np0005554310.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 10 18:54:21 np0005554310.novalocal sshd-session[4306]: Received disconnect from 38.102.83.114 port 46918:11: disconnected by user
Dec 10 18:54:21 np0005554310.novalocal sshd-session[4306]: Disconnected from user zuul 38.102.83.114 port 46918
Dec 10 18:54:21 np0005554310.novalocal sshd-session[4292]: pam_unix(sshd:session): session closed for user zuul
Dec 10 18:54:21 np0005554310.novalocal systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Dec 10 18:54:24 np0005554310.novalocal sshd-session[7287]: Accepted publickey for zuul from 38.102.83.114 port 51606 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 18:54:24 np0005554310.novalocal systemd-logind[789]: New session 3 of user zuul.
Dec 10 18:54:24 np0005554310.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 10 18:54:24 np0005554310.novalocal sshd-session[7287]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 18:54:25 np0005554310.novalocal sudo[7366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gonqaburxneptsndogrougfpotvaxnuu ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 10 18:54:25 np0005554310.novalocal sudo[7366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:54:25 np0005554310.novalocal python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 18:54:25 np0005554310.novalocal sudo[7366]: pam_unix(sudo:session): session closed for user root
Dec 10 18:54:25 np0005554310.novalocal sudo[7439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsuxqqxmulgztwhcdadroblnsfkwvogj ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 10 18:54:25 np0005554310.novalocal sudo[7439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 18:54:25 np0005554310.novalocal python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765392864.874348-259-236494109077112/source _original_basename=tmp6rnu56k6 follow=False checksum=0e84c1a8382e0b408ac4ffc8129c181e43363e45 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 18:54:25 np0005554310.novalocal sudo[7439]: pam_unix(sudo:session): session closed for user root
Dec 10 18:54:27 np0005554310.novalocal sshd-session[7290]: Connection closed by 38.102.83.114 port 51606
Dec 10 18:54:27 np0005554310.novalocal sshd-session[7287]: pam_unix(sshd:session): session closed for user zuul
Dec 10 18:54:27 np0005554310.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 10 18:54:27 np0005554310.novalocal systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Dec 10 18:54:27 np0005554310.novalocal systemd-logind[789]: Removed session 3.
Dec 10 18:56:49 np0005554310.novalocal systemd[4296]: Created slice User Background Tasks Slice.
Dec 10 18:56:50 np0005554310.novalocal systemd[4296]: Starting Cleanup of User's Temporary Files and Directories...
Dec 10 18:56:50 np0005554310.novalocal systemd[4296]: Finished Cleanup of User's Temporary Files and Directories.
Dec 10 19:00:45 np0005554310.novalocal sshd-session[7469]: Connection closed by 36.133.44.67 port 49598
Dec 10 19:01:01 np0005554310.novalocal CROND[7472]: (root) CMD (run-parts /etc/cron.hourly)
Dec 10 19:01:01 np0005554310.novalocal run-parts[7475]: (/etc/cron.hourly) starting 0anacron
Dec 10 19:01:01 np0005554310.novalocal anacron[7483]: Anacron started on 2025-12-10
Dec 10 19:01:01 np0005554310.novalocal anacron[7483]: Will run job `cron.daily' in 24 min.
Dec 10 19:01:01 np0005554310.novalocal anacron[7483]: Will run job `cron.weekly' in 44 min.
Dec 10 19:01:01 np0005554310.novalocal anacron[7483]: Will run job `cron.monthly' in 64 min.
Dec 10 19:01:01 np0005554310.novalocal anacron[7483]: Jobs will be executed sequentially
Dec 10 19:01:01 np0005554310.novalocal run-parts[7485]: (/etc/cron.hourly) finished 0anacron
Dec 10 19:01:01 np0005554310.novalocal CROND[7471]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 10 19:01:38 np0005554310.novalocal sshd-session[7487]: Accepted publickey for zuul from 38.102.83.114 port 39244 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 19:01:38 np0005554310.novalocal systemd-logind[789]: New session 4 of user zuul.
Dec 10 19:01:38 np0005554310.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 10 19:01:38 np0005554310.novalocal sshd-session[7487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:01:38 np0005554310.novalocal sudo[7514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sownksyebfuagzgcifpssciczencwoyi ; /usr/bin/python3'
Dec 10 19:01:38 np0005554310.novalocal sudo[7514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:38 np0005554310.novalocal python3[7516]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-d2d5-6d9a-000000001f2d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:01:38 np0005554310.novalocal sudo[7514]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:38 np0005554310.novalocal sudo[7543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emjwhvzbjxuiwgjxpzraqplfefuowsom ; /usr/bin/python3'
Dec 10 19:01:38 np0005554310.novalocal sudo[7543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:39 np0005554310.novalocal python3[7545]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:01:39 np0005554310.novalocal sudo[7543]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:39 np0005554310.novalocal sudo[7569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvgtjvbgpfodnnlnymefojpfyzmgcza ; /usr/bin/python3'
Dec 10 19:01:39 np0005554310.novalocal sudo[7569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:39 np0005554310.novalocal python3[7571]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:01:39 np0005554310.novalocal sudo[7569]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:39 np0005554310.novalocal sudo[7595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahwfqkfrrtalxwwtvxmeszmmuraetzqq ; /usr/bin/python3'
Dec 10 19:01:39 np0005554310.novalocal sudo[7595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:39 np0005554310.novalocal python3[7597]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:01:39 np0005554310.novalocal sudo[7595]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:39 np0005554310.novalocal sudo[7621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieevbminrrsnirzkmniqhzbchlntykjp ; /usr/bin/python3'
Dec 10 19:01:39 np0005554310.novalocal sudo[7621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:39 np0005554310.novalocal python3[7623]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:01:39 np0005554310.novalocal sudo[7621]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:40 np0005554310.novalocal sudo[7647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwviuinvixhsyxoyrzgqpjdzsmfsnuez ; /usr/bin/python3'
Dec 10 19:01:40 np0005554310.novalocal sudo[7647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:40 np0005554310.novalocal python3[7649]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:01:40 np0005554310.novalocal sudo[7647]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:40 np0005554310.novalocal sudo[7725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnpasoyxpurdtjvhorzhkphludrwfxdw ; /usr/bin/python3'
Dec 10 19:01:40 np0005554310.novalocal sudo[7725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:40 np0005554310.novalocal python3[7727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:01:40 np0005554310.novalocal sudo[7725]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:41 np0005554310.novalocal sudo[7798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmwzryasiypqqtddeztpyqvpkqscqsst ; /usr/bin/python3'
Dec 10 19:01:41 np0005554310.novalocal sudo[7798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:41 np0005554310.novalocal python3[7800]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765393300.5720332-503-185969089289360/source _original_basename=tmpnqilu7wb follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:01:41 np0005554310.novalocal sudo[7798]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:41 np0005554310.novalocal sudo[7848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-conlhxouzibwmztlrdcefvhqalswuwbc ; /usr/bin/python3'
Dec 10 19:01:41 np0005554310.novalocal sudo[7848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:42 np0005554310.novalocal python3[7850]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:01:42 np0005554310.novalocal systemd[1]: Reloading.
Dec 10 19:01:42 np0005554310.novalocal systemd-rc-local-generator[7870]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:01:42 np0005554310.novalocal sudo[7848]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:43 np0005554310.novalocal sudo[7905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osrbwlureduedcoxyynzeshyonmncnrs ; /usr/bin/python3'
Dec 10 19:01:43 np0005554310.novalocal sudo[7905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:43 np0005554310.novalocal python3[7907]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 10 19:01:43 np0005554310.novalocal sudo[7905]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:43 np0005554310.novalocal sudo[7931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oehpmgheyinmykkfxjvfybsdensrxrsb ; /usr/bin/python3'
Dec 10 19:01:43 np0005554310.novalocal sudo[7931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:44 np0005554310.novalocal python3[7933]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:01:44 np0005554310.novalocal sudo[7931]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:44 np0005554310.novalocal sudo[7959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-octoqlzvvaqykspypiwcfqwiomanqdwn ; /usr/bin/python3'
Dec 10 19:01:44 np0005554310.novalocal sudo[7959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:44 np0005554310.novalocal python3[7961]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:01:44 np0005554310.novalocal sudo[7959]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:44 np0005554310.novalocal sudo[7987]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjdnyiskhxefenfujlcfladztwqxuuxo ; /usr/bin/python3'
Dec 10 19:01:44 np0005554310.novalocal sudo[7987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:44 np0005554310.novalocal python3[7989]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:01:44 np0005554310.novalocal sudo[7987]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:44 np0005554310.novalocal sudo[8015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idicwwsrbecxezlxclfategrunssltec ; /usr/bin/python3'
Dec 10 19:01:44 np0005554310.novalocal sudo[8015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:44 np0005554310.novalocal python3[8017]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:01:44 np0005554310.novalocal sudo[8015]: pam_unix(sudo:session): session closed for user root
Dec 10 19:01:45 np0005554310.novalocal python3[8044]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-d2d5-6d9a-000000001f34-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:01:46 np0005554310.novalocal python3[8074]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 10 19:01:48 np0005554310.novalocal sshd-session[7490]: Connection closed by 38.102.83.114 port 39244
Dec 10 19:01:48 np0005554310.novalocal sshd-session[7487]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:01:48 np0005554310.novalocal systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Dec 10 19:01:48 np0005554310.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 10 19:01:48 np0005554310.novalocal systemd[1]: session-4.scope: Consumed 4.100s CPU time.
Dec 10 19:01:48 np0005554310.novalocal systemd-logind[789]: Removed session 4.
Dec 10 19:01:49 np0005554310.novalocal sshd-session[8079]: Accepted publickey for zuul from 38.102.83.114 port 48686 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 19:01:49 np0005554310.novalocal systemd-logind[789]: New session 5 of user zuul.
Dec 10 19:01:49 np0005554310.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 10 19:01:49 np0005554310.novalocal sshd-session[8079]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:01:50 np0005554310.novalocal sudo[8106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxsjkpozjrukzsmgbhztikagrorymuis ; /usr/bin/python3'
Dec 10 19:01:50 np0005554310.novalocal sudo[8106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:01:50 np0005554310.novalocal python3[8108]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:02:06 np0005554310.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:02:18 np0005554310.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:02:27 np0005554310.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:02:28 np0005554310.novalocal setsebool[8175]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 10 19:02:28 np0005554310.novalocal setsebool[8175]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:02:40 np0005554310.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:02:48 np0005554310.novalocal sshd[1004]: Timeout before authentication for connection from 36.133.44.67 to 38.102.83.158, pid = 7470
Dec 10 19:02:59 np0005554310.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 10 19:02:59 np0005554310.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:02:59 np0005554310.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:02:59 np0005554310.novalocal systemd[1]: Reloading.
Dec 10 19:02:59 np0005554310.novalocal systemd-rc-local-generator[8933]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:02:59 np0005554310.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:03:00 np0005554310.novalocal sudo[8106]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:01 np0005554310.novalocal python3[10629]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-8a3f-243c-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:03:02 np0005554310.novalocal kernel: evm: overlay not supported
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: Starting D-Bus User Message Bus...
Dec 10 19:03:02 np0005554310.novalocal dbus-broker-launch[11773]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 10 19:03:02 np0005554310.novalocal dbus-broker-launch[11773]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: Started D-Bus User Message Bus.
Dec 10 19:03:02 np0005554310.novalocal dbus-broker-lau[11773]: Ready
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: Created slice Slice /user.
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: podman-11625.scope: unit configures an IP firewall, but not running as root.
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: (This warning is only shown for the first unit using IP firewalling.)
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: Started podman-11625.scope.
Dec 10 19:03:02 np0005554310.novalocal systemd[4296]: Started podman-pause-edf13f71.scope.
Dec 10 19:03:03 np0005554310.novalocal sudo[12459]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbckjeswimuupetfwqlpugugzbcrweyc ; /usr/bin/python3'
Dec 10 19:03:03 np0005554310.novalocal sudo[12459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:03:03 np0005554310.novalocal python3[12485]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.65:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.65:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:03:03 np0005554310.novalocal python3[12485]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 10 19:03:03 np0005554310.novalocal sudo[12459]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:03 np0005554310.novalocal sshd-session[8082]: Connection closed by 38.102.83.114 port 48686
Dec 10 19:03:03 np0005554310.novalocal sshd-session[8079]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:03:03 np0005554310.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 10 19:03:03 np0005554310.novalocal systemd[1]: session-5.scope: Consumed 1min 955ms CPU time.
Dec 10 19:03:03 np0005554310.novalocal systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Dec 10 19:03:03 np0005554310.novalocal systemd-logind[789]: Removed session 5.
Dec 10 19:03:22 np0005554310.novalocal sshd-session[21105]: Unable to negotiate with 38.102.83.132 port 41996: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 10 19:03:22 np0005554310.novalocal sshd-session[21099]: Unable to negotiate with 38.102.83.132 port 42010: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 10 19:03:22 np0005554310.novalocal sshd-session[21106]: Connection closed by 38.102.83.132 port 41984 [preauth]
Dec 10 19:03:22 np0005554310.novalocal sshd-session[21103]: Connection closed by 38.102.83.132 port 41986 [preauth]
Dec 10 19:03:22 np0005554310.novalocal sshd-session[21101]: Unable to negotiate with 38.102.83.132 port 41994: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 10 19:03:26 np0005554310.novalocal irqbalance[780]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 10 19:03:26 np0005554310.novalocal irqbalance[780]: IRQ 27 affinity is now unmanaged
Dec 10 19:03:28 np0005554310.novalocal sshd-session[23267]: Accepted publickey for zuul from 38.102.83.114 port 35742 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 19:03:28 np0005554310.novalocal systemd-logind[789]: New session 6 of user zuul.
Dec 10 19:03:28 np0005554310.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 10 19:03:28 np0005554310.novalocal sshd-session[23267]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:03:28 np0005554310.novalocal python3[23373]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLmOK1A5oXW51Q5bg6xVIuf59RLV0KenWQfee0C3saFh5xIV+rW1EBs9vsuCLjr05iAVdXESY3muJ3D1wDmmgRE= zuul@np0005554309.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 19:03:28 np0005554310.novalocal sudo[23551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgtfkanouteoqnjdgdqhhgvalqrfwakt ; /usr/bin/python3'
Dec 10 19:03:28 np0005554310.novalocal sudo[23551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:03:28 np0005554310.novalocal python3[23560]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLmOK1A5oXW51Q5bg6xVIuf59RLV0KenWQfee0C3saFh5xIV+rW1EBs9vsuCLjr05iAVdXESY3muJ3D1wDmmgRE= zuul@np0005554309.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 19:03:28 np0005554310.novalocal sudo[23551]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:29 np0005554310.novalocal sudo[23848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umwoqdfnhqlonhysblnspcimhftcvlfc ; /usr/bin/python3'
Dec 10 19:03:29 np0005554310.novalocal sudo[23848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:03:29 np0005554310.novalocal python3[23863]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005554310.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 10 19:03:29 np0005554310.novalocal useradd[23945]: new group: name=cloud-admin, GID=1002
Dec 10 19:03:29 np0005554310.novalocal useradd[23945]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 10 19:03:29 np0005554310.novalocal sudo[23848]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:29 np0005554310.novalocal sudo[24075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ashjdrzzsudfbeqvtuufkncshdcxiuak ; /usr/bin/python3'
Dec 10 19:03:29 np0005554310.novalocal sudo[24075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:03:29 np0005554310.novalocal python3[24085]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLmOK1A5oXW51Q5bg6xVIuf59RLV0KenWQfee0C3saFh5xIV+rW1EBs9vsuCLjr05iAVdXESY3muJ3D1wDmmgRE= zuul@np0005554309.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 10 19:03:29 np0005554310.novalocal sudo[24075]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:30 np0005554310.novalocal sudo[24343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lptvmqlhbsdvhdjrfzoailwyudkajhxn ; /usr/bin/python3'
Dec 10 19:03:30 np0005554310.novalocal sudo[24343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:03:30 np0005554310.novalocal python3[24351]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:03:30 np0005554310.novalocal sudo[24343]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:30 np0005554310.novalocal sudo[24598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphlvijuyojldvpgdvenqqsjtsmrshkc ; /usr/bin/python3'
Dec 10 19:03:30 np0005554310.novalocal sudo[24598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:03:30 np0005554310.novalocal python3[24608]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765393410.0095925-135-120226501087092/source _original_basename=tmpd03j22y9 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:03:30 np0005554310.novalocal sudo[24598]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:31 np0005554310.novalocal sudo[24912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itxgttvkogwcvffqjvdpwlpinzokmlcx ; /usr/bin/python3'
Dec 10 19:03:31 np0005554310.novalocal sudo[24912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:03:31 np0005554310.novalocal python3[24921]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 10 19:03:31 np0005554310.novalocal systemd[1]: Starting Hostname Service...
Dec 10 19:03:31 np0005554310.novalocal systemd[1]: Started Hostname Service.
Dec 10 19:03:31 np0005554310.novalocal systemd-hostnamed[25018]: Changed pretty hostname to 'compute-0'
Dec 10 19:03:31 compute-0 systemd-hostnamed[25018]: Hostname set to <compute-0> (static)
Dec 10 19:03:31 compute-0 NetworkManager[7185]: <info>  [1765393411.6006] hostname: static hostname changed from "np0005554310.novalocal" to "compute-0"
Dec 10 19:03:31 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 10 19:03:31 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 10 19:03:31 compute-0 sudo[24912]: pam_unix(sudo:session): session closed for user root
Dec 10 19:03:31 compute-0 sshd-session[23314]: Connection closed by 38.102.83.114 port 35742
Dec 10 19:03:31 compute-0 sshd-session[23267]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:03:31 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 10 19:03:31 compute-0 systemd[1]: session-6.scope: Consumed 2.137s CPU time.
Dec 10 19:03:31 compute-0 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Dec 10 19:03:31 compute-0 systemd-logind[789]: Removed session 6.
Dec 10 19:03:41 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 10 19:03:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:03:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:03:44 compute-0 systemd[1]: man-db-cache-update.service: Consumed 53.415s CPU time.
Dec 10 19:03:44 compute-0 systemd[1]: run-r32124405b8f840cfabd7053451d75afa.service: Deactivated successfully.
Dec 10 19:04:01 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 10 19:06:50 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 10 19:06:50 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 10 19:06:50 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 10 19:06:50 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 10 19:08:56 compute-0 sshd-session[29937]: Accepted publickey for zuul from 38.102.83.132 port 35712 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 19:08:56 compute-0 systemd-logind[789]: New session 7 of user zuul.
Dec 10 19:08:56 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 10 19:08:56 compute-0 sshd-session[29937]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:08:57 compute-0 python3[30013]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:08:58 compute-0 sudo[30127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-julodvcmadjdslrhoypmvxhkvvbvwyym ; /usr/bin/python3'
Dec 10 19:08:58 compute-0 sudo[30127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:08:58 compute-0 python3[30129]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:08:58 compute-0 sudo[30127]: pam_unix(sudo:session): session closed for user root
Dec 10 19:08:58 compute-0 sudo[30200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfgovmrsryfjabtsoxnlwukbdqcusiqm ; /usr/bin/python3'
Dec 10 19:08:58 compute-0 sudo[30200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:08:59 compute-0 python3[30202]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765393738.1809163-33585-75394705201481/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:08:59 compute-0 sudo[30200]: pam_unix(sudo:session): session closed for user root
Dec 10 19:08:59 compute-0 sudo[30226]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxepwdvkzctziiwtkvuvhjtprqrckrpe ; /usr/bin/python3'
Dec 10 19:08:59 compute-0 sudo[30226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:08:59 compute-0 python3[30228]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:08:59 compute-0 sudo[30226]: pam_unix(sudo:session): session closed for user root
Dec 10 19:08:59 compute-0 sudo[30299]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahavjdtikvkascujnnjnnxjdtldfepfs ; /usr/bin/python3'
Dec 10 19:08:59 compute-0 sudo[30299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:08:59 compute-0 python3[30301]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765393738.1809163-33585-75394705201481/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:08:59 compute-0 sudo[30299]: pam_unix(sudo:session): session closed for user root
Dec 10 19:08:59 compute-0 sudo[30325]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agicfhvtcuxyyrtddfftnwjbxeffixyr ; /usr/bin/python3'
Dec 10 19:08:59 compute-0 sudo[30325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:00 compute-0 python3[30327]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:09:00 compute-0 sudo[30325]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:00 compute-0 sudo[30398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ophsnxhsaxuuljfhwsthuqatpsstxtnt ; /usr/bin/python3'
Dec 10 19:09:00 compute-0 sudo[30398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:00 compute-0 python3[30400]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765393738.1809163-33585-75394705201481/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:09:00 compute-0 sudo[30398]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:00 compute-0 sudo[30424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xamxdhhmhfmokhxzuegwikdrwqdxocqm ; /usr/bin/python3'
Dec 10 19:09:00 compute-0 sudo[30424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:00 compute-0 python3[30426]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:09:00 compute-0 sudo[30424]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:00 compute-0 sudo[30497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfoqvvcwsisuuqqsmmqynddvclnqyyex ; /usr/bin/python3'
Dec 10 19:09:00 compute-0 sudo[30497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:00 compute-0 python3[30499]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765393738.1809163-33585-75394705201481/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:09:01 compute-0 sudo[30497]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:01 compute-0 sudo[30523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdovlytrzjmoehrcszlqvsrhqixmeff ; /usr/bin/python3'
Dec 10 19:09:01 compute-0 sudo[30523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:01 compute-0 python3[30525]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:09:01 compute-0 sudo[30523]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:01 compute-0 sudo[30596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkxoyxujxztyxyqdcezxsvuleiaccpgh ; /usr/bin/python3'
Dec 10 19:09:01 compute-0 sudo[30596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:01 compute-0 python3[30598]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765393738.1809163-33585-75394705201481/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:09:01 compute-0 sudo[30596]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:01 compute-0 sudo[30622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfctfzhdirwostqaefcanyrtgtnnhrrw ; /usr/bin/python3'
Dec 10 19:09:01 compute-0 sudo[30622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:01 compute-0 python3[30624]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:09:01 compute-0 sudo[30622]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:01 compute-0 sudo[30695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hleshxfmmnshxswttaykepewjbminurg ; /usr/bin/python3'
Dec 10 19:09:01 compute-0 sudo[30695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:02 compute-0 python3[30697]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765393738.1809163-33585-75394705201481/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:09:02 compute-0 sudo[30695]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:02 compute-0 sudo[30721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqkdxbyxduqskulmawxddlxgojmlkbx ; /usr/bin/python3'
Dec 10 19:09:02 compute-0 sudo[30721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:02 compute-0 python3[30723]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 10 19:09:02 compute-0 sudo[30721]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:02 compute-0 sudo[30794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyrsjriyvyefqazwogyzbaqtjgvohwij ; /usr/bin/python3'
Dec 10 19:09:02 compute-0 sudo[30794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:09:02 compute-0 python3[30796]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765393738.1809163-33585-75394705201481/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:09:02 compute-0 sudo[30794]: pam_unix(sudo:session): session closed for user root
Dec 10 19:09:04 compute-0 sshd-session[30821]: Connection closed by 192.168.122.11 port 36402 [preauth]
Dec 10 19:09:04 compute-0 sshd-session[30822]: Unable to negotiate with 192.168.122.11 port 36422: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 10 19:09:04 compute-0 sshd-session[30823]: Connection closed by 192.168.122.11 port 36392 [preauth]
Dec 10 19:09:04 compute-0 sshd-session[30824]: Unable to negotiate with 192.168.122.11 port 36418: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 10 19:09:04 compute-0 sshd-session[30826]: Unable to negotiate with 192.168.122.11 port 36414: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 10 19:09:08 compute-0 sshd-session[30831]: Received disconnect from 193.46.255.159 port 49604:11:  [preauth]
Dec 10 19:09:08 compute-0 sshd-session[30831]: Disconnected from authenticating user root 193.46.255.159 port 49604 [preauth]
Dec 10 19:11:46 compute-0 python3[30857]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:16:46 compute-0 sshd-session[29940]: Received disconnect from 38.102.83.132 port 35712:11: disconnected by user
Dec 10 19:16:46 compute-0 sshd-session[29940]: Disconnected from user zuul 38.102.83.132 port 35712
Dec 10 19:16:46 compute-0 sshd-session[29937]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:16:46 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 10 19:16:46 compute-0 systemd[1]: session-7.scope: Consumed 4.485s CPU time.
Dec 10 19:16:46 compute-0 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Dec 10 19:16:46 compute-0 systemd-logind[789]: Removed session 7.
Dec 10 19:25:01 compute-0 anacron[7483]: Job `cron.daily' started
Dec 10 19:25:01 compute-0 anacron[7483]: Job `cron.daily' terminated
Dec 10 19:25:17 compute-0 sshd-session[30866]: Accepted publickey for zuul from 192.168.122.30 port 49476 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:25:17 compute-0 systemd-logind[789]: New session 8 of user zuul.
Dec 10 19:25:17 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 10 19:25:17 compute-0 sshd-session[30866]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:25:18 compute-0 python3.9[31019]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:25:19 compute-0 sudo[31198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzcyjjkpflwffwlmjdxedkanzpphfpeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394718.8960319-32-157586496645548/AnsiballZ_command.py'
Dec 10 19:25:19 compute-0 sudo[31198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:19 compute-0 python3.9[31200]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:25:28 compute-0 sudo[31198]: pam_unix(sudo:session): session closed for user root
Dec 10 19:25:28 compute-0 sshd-session[30869]: Connection closed by 192.168.122.30 port 49476
Dec 10 19:25:28 compute-0 sshd-session[30866]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:25:28 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 10 19:25:28 compute-0 systemd[1]: session-8.scope: Consumed 8.332s CPU time.
Dec 10 19:25:28 compute-0 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Dec 10 19:25:28 compute-0 systemd-logind[789]: Removed session 8.
Dec 10 19:25:34 compute-0 sshd-session[31257]: Accepted publickey for zuul from 192.168.122.30 port 37710 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:25:34 compute-0 systemd-logind[789]: New session 9 of user zuul.
Dec 10 19:25:34 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 10 19:25:34 compute-0 sshd-session[31257]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:25:35 compute-0 python3.9[31410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:25:35 compute-0 sshd-session[31260]: Connection closed by 192.168.122.30 port 37710
Dec 10 19:25:35 compute-0 sshd-session[31257]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:25:35 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 10 19:25:35 compute-0 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Dec 10 19:25:35 compute-0 systemd-logind[789]: Removed session 9.
Dec 10 19:25:51 compute-0 sshd-session[31438]: Accepted publickey for zuul from 192.168.122.30 port 43562 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:25:51 compute-0 systemd-logind[789]: New session 10 of user zuul.
Dec 10 19:25:51 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 10 19:25:51 compute-0 sshd-session[31438]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:25:52 compute-0 python3.9[31591]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 10 19:25:53 compute-0 python3.9[31765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:25:54 compute-0 sudo[31915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxyrfnswnexirxwsilagtiligfcclmtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394754.2469807-45-189420587503002/AnsiballZ_command.py'
Dec 10 19:25:54 compute-0 sudo[31915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:54 compute-0 python3.9[31917]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:25:54 compute-0 sudo[31915]: pam_unix(sudo:session): session closed for user root
Dec 10 19:25:56 compute-0 sudo[32068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlglwquadpjwelzsupwhihjibpwpxzsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394755.5844793-57-141715647728280/AnsiballZ_stat.py'
Dec 10 19:25:56 compute-0 sudo[32068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:56 compute-0 python3.9[32070]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:25:56 compute-0 sudo[32068]: pam_unix(sudo:session): session closed for user root
Dec 10 19:25:56 compute-0 sudo[32220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtclurudygaonwwdmkumkbkuinfsxzly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394756.3402982-65-70740782498915/AnsiballZ_file.py'
Dec 10 19:25:56 compute-0 sudo[32220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:56 compute-0 python3.9[32222]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:25:56 compute-0 sudo[32220]: pam_unix(sudo:session): session closed for user root
Dec 10 19:25:57 compute-0 sudo[32372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qauqwdyqtqegaaljudphktpzimnyhyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394757.0800462-73-141800779347642/AnsiballZ_stat.py'
Dec 10 19:25:57 compute-0 sudo[32372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:57 compute-0 python3.9[32374]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:25:57 compute-0 sudo[32372]: pam_unix(sudo:session): session closed for user root
Dec 10 19:25:58 compute-0 sudo[32495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdjchvujrwsmqsxxavrajyecbrfjpsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394757.0800462-73-141800779347642/AnsiballZ_copy.py'
Dec 10 19:25:58 compute-0 sudo[32495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:58 compute-0 python3.9[32497]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765394757.0800462-73-141800779347642/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:25:58 compute-0 sudo[32495]: pam_unix(sudo:session): session closed for user root
Dec 10 19:25:58 compute-0 sudo[32647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ultanjfmtjebhrojqrujjcdlkuutffmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394758.460956-88-28363777132668/AnsiballZ_setup.py'
Dec 10 19:25:58 compute-0 sudo[32647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:59 compute-0 python3.9[32649]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:25:59 compute-0 sudo[32647]: pam_unix(sudo:session): session closed for user root
Dec 10 19:25:59 compute-0 sudo[32803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdauexiwofoldofrsjctxcuhmjprqwbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394759.332054-96-57856649558934/AnsiballZ_file.py'
Dec 10 19:25:59 compute-0 sudo[32803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:25:59 compute-0 python3.9[32805]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:25:59 compute-0 sudo[32803]: pam_unix(sudo:session): session closed for user root
Dec 10 19:26:00 compute-0 sudo[32955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmfhaigtzzcqkrxqobsnuylhuahxyjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394759.9476552-105-81490555868067/AnsiballZ_file.py'
Dec 10 19:26:00 compute-0 sudo[32955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:26:00 compute-0 python3.9[32957]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:26:00 compute-0 sudo[32955]: pam_unix(sudo:session): session closed for user root
Dec 10 19:26:01 compute-0 python3.9[33107]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:26:04 compute-0 python3.9[33360]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:26:04 compute-0 python3.9[33510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:26:06 compute-0 python3.9[33664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:26:07 compute-0 sudo[33820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnykkjpnkxcpngmlchhfptmdnpdlmmgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394766.9523745-153-202864477590918/AnsiballZ_setup.py'
Dec 10 19:26:07 compute-0 sudo[33820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:26:07 compute-0 python3.9[33822]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:26:07 compute-0 sudo[33820]: pam_unix(sudo:session): session closed for user root
Dec 10 19:26:08 compute-0 sudo[33904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plwfvmasxqscfyiodbfqnjiylklmltbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394766.9523745-153-202864477590918/AnsiballZ_dnf.py'
Dec 10 19:26:08 compute-0 sudo[33904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:26:08 compute-0 python3.9[33906]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:26:24 compute-0 sshd-session[34020]: Received disconnect from 193.46.255.33 port 40238:11:  [preauth]
Dec 10 19:26:24 compute-0 sshd-session[34020]: Disconnected from authenticating user root 193.46.255.33 port 40238 [preauth]
Dec 10 19:26:26 compute-0 irqbalance[780]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 10 19:26:26 compute-0 irqbalance[780]: IRQ 26 affinity is now unmanaged
Dec 10 19:26:49 compute-0 systemd[1]: Reloading.
Dec 10 19:26:49 compute-0 systemd-rc-local-generator[34101]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:26:49 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 10 19:26:50 compute-0 systemd[1]: Reloading.
Dec 10 19:26:50 compute-0 systemd-rc-local-generator[34148]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:26:50 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 10 19:26:50 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 10 19:26:50 compute-0 systemd[1]: Reloading.
Dec 10 19:26:50 compute-0 systemd-rc-local-generator[34185]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:26:50 compute-0 systemd[1]: Starting dnf makecache...
Dec 10 19:26:50 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 10 19:26:50 compute-0 dnf[34195]: Failed determining last makecache time.
Dec 10 19:26:50 compute-0 dnf[34195]: delorean-openstack-barbican-42b4c41831408a8e323 140 kB/s | 3.0 kB     00:00
Dec 10 19:26:50 compute-0 dnf[34195]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 161 kB/s | 3.0 kB     00:00
Dec 10 19:26:50 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 10 19:26:50 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 10 19:26:50 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 10 19:26:50 compute-0 dnf[34195]: delorean-openstack-cinder-1c00d6490d88e436f26ef 153 kB/s | 3.0 kB     00:00
Dec 10 19:26:50 compute-0 dnf[34195]: delorean-python-stevedore-c4acc5639fd2329372142 160 kB/s | 3.0 kB     00:00
Dec 10 19:26:50 compute-0 dnf[34195]: delorean-python-cloudkitty-tests-tempest-2c80f8 163 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-os-refresh-config-9bfc52b5049be2d8de61 185 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 158 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-python-designate-tests-tempest-347fdbc 150 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-glance-1fd12c29b339f30fe823e 151 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 158 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-manila-3c01b7181572c95dac462 153 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-python-whitebox-neutron-tests-tempest- 164 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-octavia-ba397f07a7331190208c 162 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-watcher-c014f81a8647287f6dcc 164 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-ansible-config_template-5ccaa22121a7ff 165 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 166 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-swift-dc98a8463506ac520c469a 169 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-python-tempestconf-8515371b7cceebd4282 168 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: delorean-openstack-heat-ui-013accbfd179753bc3f0 165 kB/s | 3.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: CentOS Stream 9 - BaseOS                         31 kB/s | 7.0 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: CentOS Stream 9 - AppStream                      74 kB/s | 7.4 kB     00:00
Dec 10 19:26:51 compute-0 dnf[34195]: CentOS Stream 9 - CRB                            31 kB/s | 6.9 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: CentOS Stream 9 - Extras packages                74 kB/s | 8.3 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: dlrn-antelope-testing                           111 kB/s | 3.0 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: dlrn-antelope-build-deps                        113 kB/s | 3.0 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: centos9-rabbitmq                                103 kB/s | 3.0 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: centos9-storage                                  80 kB/s | 3.0 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: centos9-opstools                                115 kB/s | 3.0 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: NFV SIG OpenvSwitch                             143 kB/s | 3.0 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: repo-setup-centos-appstream                     174 kB/s | 4.4 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: repo-setup-centos-baseos                        138 kB/s | 3.9 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: repo-setup-centos-highavailability              165 kB/s | 3.9 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: repo-setup-centos-powertools                    190 kB/s | 4.3 kB     00:00
Dec 10 19:26:52 compute-0 dnf[34195]: Extra Packages for Enterprise Linux 9 - x86_64  206 kB/s |  34 kB     00:00
Dec 10 19:26:53 compute-0 dnf[34195]: Metadata cache created.
Dec 10 19:26:53 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 10 19:26:53 compute-0 systemd[1]: Finished dnf makecache.
Dec 10 19:26:53 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.655s CPU time.
Dec 10 19:27:56 compute-0 kernel: SELinux:  Converting 2720 SID table entries...
Dec 10 19:27:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:27:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 10 19:27:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:27:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:27:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:27:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:27:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:27:57 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 10 19:27:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:27:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:27:57 compute-0 systemd[1]: Reloading.
Dec 10 19:27:57 compute-0 systemd-rc-local-generator[34539]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:27:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:27:59 compute-0 sudo[33904]: pam_unix(sudo:session): session closed for user root
Dec 10 19:27:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:27:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:27:59 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.332s CPU time.
Dec 10 19:27:59 compute-0 systemd[1]: run-r9081cf4a0fc047f79e5c46219b558d81.service: Deactivated successfully.
Dec 10 19:27:59 compute-0 sudo[35451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flmwpvtszxxcnbtkbykspgjndvstczvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394879.2311566-165-31540167166683/AnsiballZ_command.py'
Dec 10 19:27:59 compute-0 sudo[35451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:27:59 compute-0 python3.9[35453]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:00 compute-0 sudo[35451]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:01 compute-0 sudo[35732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqqvnyzaryffslmkziefrdexyssiixvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394880.530527-173-71497349551497/AnsiballZ_selinux.py'
Dec 10 19:28:01 compute-0 sudo[35732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:01 compute-0 python3.9[35734]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 10 19:28:01 compute-0 sudo[35732]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:01 compute-0 sudo[35884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycimivaquajsntztrkaruklfbljkfgen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394881.7614005-184-244398141836511/AnsiballZ_command.py'
Dec 10 19:28:01 compute-0 sudo[35884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:02 compute-0 python3.9[35886]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 10 19:28:03 compute-0 sudo[35884]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:04 compute-0 sudo[36038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqxlghstkmeisjpmqmhbiacyqzkgqodk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394884.282684-192-20990181882677/AnsiballZ_file.py'
Dec 10 19:28:04 compute-0 sudo[36038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:04 compute-0 python3.9[36040]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:28:04 compute-0 sudo[36038]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:05 compute-0 sudo[36190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwnwfofmsldwznmcleudjqpqdddbmdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394884.9114022-200-263793693973652/AnsiballZ_mount.py'
Dec 10 19:28:05 compute-0 sudo[36190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:05 compute-0 python3.9[36192]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 10 19:28:05 compute-0 sudo[36190]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:06 compute-0 sudo[36342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmcrmprlwjtmchilnphxyhawnpyznoaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394886.6662679-228-148717687592882/AnsiballZ_file.py'
Dec 10 19:28:06 compute-0 sudo[36342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:07 compute-0 python3.9[36344]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:28:07 compute-0 sudo[36342]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:07 compute-0 sudo[36494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpdhodvaqddlohpxlvwzvnlikriwould ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394887.2685442-236-184586562888526/AnsiballZ_stat.py'
Dec 10 19:28:07 compute-0 sudo[36494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:07 compute-0 python3.9[36496]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:28:07 compute-0 sudo[36494]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:08 compute-0 sudo[36617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aekmyfievdiddyubbvijmcxqggjvtxoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394887.2685442-236-184586562888526/AnsiballZ_copy.py'
Dec 10 19:28:08 compute-0 sudo[36617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:08 compute-0 python3.9[36619]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765394887.2685442-236-184586562888526/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:28:08 compute-0 sudo[36617]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:09 compute-0 sudo[36769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzzpxhhcugbkzcrqjidpgsummvufawpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394888.7835286-260-209056771205589/AnsiballZ_stat.py'
Dec 10 19:28:09 compute-0 sudo[36769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:09 compute-0 python3.9[36771]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:28:09 compute-0 sudo[36769]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:09 compute-0 sudo[36921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqioyfienmfaaonntzkiwridzxrpfqnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394889.3795552-268-226481806895827/AnsiballZ_command.py'
Dec 10 19:28:09 compute-0 sudo[36921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:09 compute-0 python3.9[36923]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:09 compute-0 sudo[36921]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:10 compute-0 sudo[37074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtllksldkxechfkvftyshdxxvfshyjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394890.1081004-276-206978570517637/AnsiballZ_file.py'
Dec 10 19:28:10 compute-0 sudo[37074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:12 compute-0 python3.9[37076]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:28:12 compute-0 sudo[37074]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:13 compute-0 sudo[37226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgcxbwqvqmgzetaeyknlfuazyeyljdzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394892.9460227-287-144673209472324/AnsiballZ_getent.py'
Dec 10 19:28:13 compute-0 sudo[37226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:15 compute-0 python3.9[37228]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 10 19:28:15 compute-0 sudo[37226]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:15 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:28:15 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:28:15 compute-0 sudo[37380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwnpxovmoonhzvojxraowkvhsgybpfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394895.5327375-295-47458793260229/AnsiballZ_group.py'
Dec 10 19:28:15 compute-0 sudo[37380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:16 compute-0 python3.9[37382]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 10 19:28:16 compute-0 groupadd[37383]: group added to /etc/group: name=qemu, GID=107
Dec 10 19:28:16 compute-0 groupadd[37383]: group added to /etc/gshadow: name=qemu
Dec 10 19:28:16 compute-0 groupadd[37383]: new group: name=qemu, GID=107
Dec 10 19:28:16 compute-0 sudo[37380]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:17 compute-0 sudo[37538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefbvxunhyaijkftnxmkghwsmnyhekka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394896.4810932-303-122285713059928/AnsiballZ_user.py'
Dec 10 19:28:17 compute-0 sudo[37538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:17 compute-0 python3.9[37540]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 10 19:28:17 compute-0 useradd[37542]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 10 19:28:17 compute-0 sudo[37538]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:17 compute-0 sudo[37698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefuwoeemagrcvwqdiklhlviprrexywm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394897.5233476-311-12619749170142/AnsiballZ_getent.py'
Dec 10 19:28:17 compute-0 sudo[37698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:17 compute-0 python3.9[37700]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 10 19:28:18 compute-0 sudo[37698]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:18 compute-0 sudo[37851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwoymjfcemphxosjfgtjtlkiegvjyslw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394898.1775076-319-32001852718702/AnsiballZ_group.py'
Dec 10 19:28:18 compute-0 sudo[37851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:18 compute-0 python3.9[37853]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 10 19:28:18 compute-0 groupadd[37854]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 10 19:28:18 compute-0 groupadd[37854]: group added to /etc/gshadow: name=hugetlbfs
Dec 10 19:28:18 compute-0 groupadd[37854]: new group: name=hugetlbfs, GID=42477
Dec 10 19:28:18 compute-0 sudo[37851]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:19 compute-0 sudo[38009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwstxwwqdmoytutpabcxdlycjpirpxrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394899.0277264-328-122097432877805/AnsiballZ_file.py'
Dec 10 19:28:19 compute-0 sudo[38009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:19 compute-0 python3.9[38011]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 10 19:28:19 compute-0 sudo[38009]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:20 compute-0 sudo[38161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmrewhpnbuwtaszmwfhuwpdkjfgirmlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394899.8623312-339-258675623120424/AnsiballZ_dnf.py'
Dec 10 19:28:20 compute-0 sudo[38161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:20 compute-0 python3.9[38163]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:28:22 compute-0 sudo[38161]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:22 compute-0 sudo[38314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncnklkfsaqwcgulksokzmagqlxwmbfdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394902.389128-347-261992770772142/AnsiballZ_file.py'
Dec 10 19:28:22 compute-0 sudo[38314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:22 compute-0 python3.9[38316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:28:22 compute-0 sudo[38314]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:23 compute-0 sudo[38466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjtsjibnjgrptuspiumxqzmulzfkjof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394902.9719796-355-253257285216552/AnsiballZ_stat.py'
Dec 10 19:28:23 compute-0 sudo[38466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:23 compute-0 python3.9[38468]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:28:23 compute-0 sudo[38466]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:23 compute-0 sudo[38589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuakfjarzbyrworjhpfxyftedofxhjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394902.9719796-355-253257285216552/AnsiballZ_copy.py'
Dec 10 19:28:23 compute-0 sudo[38589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:23 compute-0 python3.9[38591]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765394902.9719796-355-253257285216552/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:28:23 compute-0 sudo[38589]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:24 compute-0 sudo[38741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxvrbqcregfexipikyseuqqcltjnozcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394904.0960598-370-192257480532561/AnsiballZ_systemd.py'
Dec 10 19:28:24 compute-0 sudo[38741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:24 compute-0 python3.9[38743]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:28:25 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 10 19:28:25 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 10 19:28:25 compute-0 kernel: Bridge firewalling registered
Dec 10 19:28:25 compute-0 systemd-modules-load[38747]: Inserted module 'br_netfilter'
Dec 10 19:28:25 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 10 19:28:25 compute-0 sudo[38741]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:25 compute-0 sudo[38900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eryezuzosdfjmkqtweqlfqjubszunyty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394905.3440545-378-237724314032862/AnsiballZ_stat.py'
Dec 10 19:28:25 compute-0 sudo[38900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:25 compute-0 python3.9[38902]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:28:25 compute-0 sudo[38900]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:26 compute-0 sudo[39023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pokfdjwarifinssnqbgtzconkkglnsjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394905.3440545-378-237724314032862/AnsiballZ_copy.py'
Dec 10 19:28:26 compute-0 sudo[39023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:26 compute-0 python3.9[39025]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765394905.3440545-378-237724314032862/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:28:26 compute-0 sudo[39023]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:26 compute-0 sudo[39175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avkftvhfdiigvzubfvwxbnsishsflast ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394906.6303027-396-56656590840886/AnsiballZ_dnf.py'
Dec 10 19:28:26 compute-0 sudo[39175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:27 compute-0 python3.9[39177]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:28:30 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 10 19:28:30 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 10 19:28:30 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:28:30 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:28:30 compute-0 systemd[1]: Reloading.
Dec 10 19:28:30 compute-0 systemd-rc-local-generator[39238]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:28:30 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:28:31 compute-0 sudo[39175]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:32 compute-0 python3.9[40559]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:28:32 compute-0 python3.9[41559]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 10 19:28:33 compute-0 python3.9[42281]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:28:34 compute-0 sudo[43167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbfclmrfctdlfdaiwccmifepldfdxiyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394913.8498373-435-261460451761705/AnsiballZ_command.py'
Dec 10 19:28:34 compute-0 sudo[43167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:34 compute-0 python3.9[43192]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:34 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 10 19:28:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:28:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:28:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.650s CPU time.
Dec 10 19:28:34 compute-0 systemd[1]: run-r0dba44fae2954ab1b5fba5fb4a262d57.service: Deactivated successfully.
Dec 10 19:28:34 compute-0 systemd[1]: Starting Authorization Manager...
Dec 10 19:28:34 compute-0 polkitd[43571]: Started polkitd version 0.117
Dec 10 19:28:34 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 10 19:28:34 compute-0 polkitd[43571]: Loading rules from directory /etc/polkit-1/rules.d
Dec 10 19:28:34 compute-0 polkitd[43571]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 10 19:28:34 compute-0 polkitd[43571]: Finished loading, compiling and executing 2 rules
Dec 10 19:28:34 compute-0 systemd[1]: Started Authorization Manager.
Dec 10 19:28:34 compute-0 polkitd[43571]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 10 19:28:34 compute-0 sudo[43167]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:35 compute-0 sudo[43739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djkdskexlfaaxlwpzdljteahzrvmxtpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394915.0986485-444-134578130206875/AnsiballZ_systemd.py'
Dec 10 19:28:35 compute-0 sudo[43739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:35 compute-0 python3.9[43741]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:28:35 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 10 19:28:35 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 10 19:28:35 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 10 19:28:35 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 10 19:28:35 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 10 19:28:35 compute-0 sudo[43739]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:36 compute-0 python3.9[43902]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 10 19:28:38 compute-0 sudo[44052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekshyunzlwhxwssjefklrssdtzywyyoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394917.9709344-501-115011598476601/AnsiballZ_systemd.py'
Dec 10 19:28:38 compute-0 sudo[44052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:38 compute-0 python3.9[44054]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:28:38 compute-0 systemd[1]: Reloading.
Dec 10 19:28:38 compute-0 systemd-rc-local-generator[44081]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:28:38 compute-0 sudo[44052]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:39 compute-0 sudo[44240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkokcqaedmrlpxwvzdsvvyfrrtlcbudo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394919.0526907-501-157564378313055/AnsiballZ_systemd.py'
Dec 10 19:28:39 compute-0 sudo[44240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:39 compute-0 python3.9[44242]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:28:39 compute-0 systemd[1]: Reloading.
Dec 10 19:28:39 compute-0 systemd-rc-local-generator[44272]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:28:39 compute-0 sudo[44240]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:40 compute-0 sudo[44429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqcinydxciwwayrohptkoigegmkpvcxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394920.148912-517-11989428365115/AnsiballZ_command.py'
Dec 10 19:28:40 compute-0 sudo[44429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:40 compute-0 python3.9[44431]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:40 compute-0 sudo[44429]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:41 compute-0 sudo[44582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bssznwmtuhbjnmmwswendkukbplztdej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394920.8858466-525-144160411638415/AnsiballZ_command.py'
Dec 10 19:28:41 compute-0 sudo[44582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:41 compute-0 python3.9[44584]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:41 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 10 19:28:41 compute-0 sudo[44582]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:41 compute-0 sudo[44735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqewgbumqybrcbkhletjakdwctsmsgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394921.5066893-533-36920865891224/AnsiballZ_command.py'
Dec 10 19:28:41 compute-0 sudo[44735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:41 compute-0 python3.9[44737]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:43 compute-0 sudo[44735]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:43 compute-0 sudo[44897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhzoigaclgixwpyrmoqukotbxcwhrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394923.5137832-541-113329952999486/AnsiballZ_command.py'
Dec 10 19:28:43 compute-0 sudo[44897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:43 compute-0 python3.9[44899]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:43 compute-0 sudo[44897]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:44 compute-0 sudo[45050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mprxpeklajcufifhrraxskarcevgflur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394924.0851784-549-67026558730343/AnsiballZ_systemd.py'
Dec 10 19:28:44 compute-0 sudo[45050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:44 compute-0 python3.9[45052]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:28:44 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 10 19:28:44 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 10 19:28:44 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 10 19:28:44 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 10 19:28:44 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 10 19:28:44 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 10 19:28:44 compute-0 sudo[45050]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:45 compute-0 sshd-session[31441]: Connection closed by 192.168.122.30 port 43562
Dec 10 19:28:45 compute-0 sshd-session[31438]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:28:45 compute-0 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Dec 10 19:28:45 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 10 19:28:45 compute-0 systemd[1]: session-10.scope: Consumed 2min 11.784s CPU time.
Dec 10 19:28:45 compute-0 systemd-logind[789]: Removed session 10.
Dec 10 19:28:50 compute-0 sshd-session[45082]: Accepted publickey for zuul from 192.168.122.30 port 34074 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:28:50 compute-0 systemd-logind[789]: New session 11 of user zuul.
Dec 10 19:28:50 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 10 19:28:50 compute-0 sshd-session[45082]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:28:51 compute-0 python3.9[45235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:28:52 compute-0 python3.9[45389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:28:53 compute-0 sudo[45543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyeriogsgnogvgdhjndojqojyfcqbqks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394933.1863124-50-162431037482400/AnsiballZ_command.py'
Dec 10 19:28:53 compute-0 sudo[45543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:53 compute-0 python3.9[45545]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:28:53 compute-0 sudo[45543]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:54 compute-0 python3.9[45696]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:28:55 compute-0 sudo[45850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwyyzmrfqghkiurcsvvvgfoqozagpwzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394935.0374668-70-130413048717292/AnsiballZ_setup.py'
Dec 10 19:28:55 compute-0 sudo[45850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:55 compute-0 python3.9[45852]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:28:55 compute-0 sudo[45850]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:56 compute-0 sudo[45934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agicwxbwrnqnxpbontbvnjehxfcfaslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394935.0374668-70-130413048717292/AnsiballZ_dnf.py'
Dec 10 19:28:56 compute-0 sudo[45934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:56 compute-0 python3.9[45936]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:28:57 compute-0 sudo[45934]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:58 compute-0 sudo[46087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icudwafvrgflmytuvdmdugytjzbysklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394937.9131572-82-205884080876378/AnsiballZ_setup.py'
Dec 10 19:28:58 compute-0 sudo[46087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:58 compute-0 python3.9[46089]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:28:58 compute-0 sudo[46087]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:59 compute-0 sudo[46258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidadolytezrdkyyucsqifgnykhobtmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394938.8199053-93-188770738395689/AnsiballZ_file.py'
Dec 10 19:28:59 compute-0 sudo[46258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:28:59 compute-0 python3.9[46260]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:28:59 compute-0 sudo[46258]: pam_unix(sudo:session): session closed for user root
Dec 10 19:28:59 compute-0 sudo[46410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iunumzvhvzqqkfrnjlainnkfidfvvntt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394939.6762393-101-155631182807195/AnsiballZ_command.py'
Dec 10 19:28:59 compute-0 sudo[46410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:00 compute-0 python3.9[46412]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4165875780-merged.mount: Deactivated successfully.
Dec 10 19:29:00 compute-0 podman[46413]: 2025-12-10 19:29:00.246032678 +0000 UTC m=+0.058383220 system refresh
Dec 10 19:29:00 compute-0 sudo[46410]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:00 compute-0 sudo[46574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogweyocffrkzihxhldnbkcgjbmmycxji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394940.4522324-109-114491528517468/AnsiballZ_stat.py'
Dec 10 19:29:00 compute-0 sudo[46574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:01 compute-0 python3.9[46576]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:29:01 compute-0 sudo[46574]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:29:01 compute-0 sudo[46697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rofdbfuurgxpskmukajqhgpeoldfydgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394940.4522324-109-114491528517468/AnsiballZ_copy.py'
Dec 10 19:29:01 compute-0 sudo[46697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:01 compute-0 python3.9[46699]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765394940.4522324-109-114491528517468/.source.json follow=False _original_basename=podman_network_config.j2 checksum=90f8d76a4e8917b56c5f519a63cb9aeaf4bdf772 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:29:01 compute-0 sudo[46697]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:02 compute-0 sudo[46849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtjyrkpjofyvuxypuezckttjntibfguh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394942.0950356-124-4520603902135/AnsiballZ_stat.py'
Dec 10 19:29:02 compute-0 sudo[46849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:02 compute-0 python3.9[46851]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:29:02 compute-0 sudo[46849]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:03 compute-0 sudo[46972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cskqukloctarzytsrhbacvpolfmyakmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394942.0950356-124-4520603902135/AnsiballZ_copy.py'
Dec 10 19:29:03 compute-0 sudo[46972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:03 compute-0 python3.9[46974]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765394942.0950356-124-4520603902135/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c7e24e791b23b6ca9af1b87173047a0fb53188da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:29:03 compute-0 sudo[46972]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:03 compute-0 sudo[47124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npgfqjvhaqdiutvdevsdqnzfrwddcqmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394943.4449468-140-183921251729413/AnsiballZ_ini_file.py'
Dec 10 19:29:03 compute-0 sudo[47124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:04 compute-0 python3.9[47126]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:29:04 compute-0 sudo[47124]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:04 compute-0 sudo[47276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vruaziybvsdlapmmaivoaasrljtrkgfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394944.3911438-140-38400841195177/AnsiballZ_ini_file.py'
Dec 10 19:29:04 compute-0 sudo[47276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:04 compute-0 python3.9[47278]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:29:04 compute-0 sudo[47276]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:05 compute-0 sudo[47428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzuivtnmnwonpmaewdszjkbwropqczmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394945.114917-140-104234628623149/AnsiballZ_ini_file.py'
Dec 10 19:29:05 compute-0 sudo[47428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:05 compute-0 python3.9[47430]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:29:05 compute-0 sudo[47428]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:06 compute-0 sudo[47580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqndhclsfiumdwszzjbwvqzgiayckimv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394946.183887-140-130023491126077/AnsiballZ_ini_file.py'
Dec 10 19:29:06 compute-0 sudo[47580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:06 compute-0 python3.9[47582]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:29:06 compute-0 sudo[47580]: pam_unix(sudo:session): session closed for user root
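[editorial sketch] The four ini_file tasks above set pids_limit=4096 in [containers], events_logger="journald" and runtime="crun" in [engine], and network_backend="netavark" in [network] of /etc/containers/containers.conf. The Python sketch below writes the same keys with configparser purely as an illustration; the real file is TOML and the quoted string values are reproduced verbatim from the log, so this is not what the community.general.ini_file module does internally.

#!/usr/bin/env python3
"""Rough sketch of the containers.conf settings applied above."""
import configparser

SETTINGS = {
    "containers": {"pids_limit": "4096"},
    "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
    "network": {"network_backend": '"netavark"'},
}

def write_containers_conf(path: str = "/etc/containers/containers.conf") -> None:
    cfg = configparser.ConfigParser()
    cfg.read(path)  # keep any sections that already exist
    for section, options in SETTINGS.items():
        if not cfg.has_section(section):
            cfg.add_section(section)
        for key, value in options.items():
            cfg.set(section, key, value)
    with open(path, "w") as handle:
        cfg.write(handle)

if __name__ == "__main__":
    write_containers_conf()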
Dec 10 19:29:07 compute-0 python3.9[47732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:29:07 compute-0 sudo[47884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okdpblpmhhtmesukgmkuddqshlzlgupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394947.6889868-180-81630779906971/AnsiballZ_dnf.py'
Dec 10 19:29:07 compute-0 sudo[47884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:08 compute-0 python3.9[47886]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:09 compute-0 sudo[47884]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:09 compute-0 sudo[48037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjtcbhhygnntsqxuounioqnmwvbdqxtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394949.6291866-188-43841389860606/AnsiballZ_dnf.py'
Dec 10 19:29:09 compute-0 sudo[48037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:10 compute-0 python3.9[48039]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:12 compute-0 sudo[48037]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:12 compute-0 sudo[48197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyqipqwlmudmffgrwrscgwjgzkuqtvck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394952.4540374-198-137209393258644/AnsiballZ_dnf.py'
Dec 10 19:29:12 compute-0 sudo[48197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:12 compute-0 python3.9[48199]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:14 compute-0 sudo[48197]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:14 compute-0 sudo[48350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atcipawrqdkazjiafxwuhynyjoozlyxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394954.5452194-207-57614524517047/AnsiballZ_dnf.py'
Dec 10 19:29:14 compute-0 sudo[48350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:15 compute-0 python3.9[48352]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:16 compute-0 sudo[48350]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:16 compute-0 sudo[48503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeqlctlcczwbowsjmpokxprehruqzesv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394956.6926448-218-46070632384937/AnsiballZ_dnf.py'
Dec 10 19:29:16 compute-0 sudo[48503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:17 compute-0 python3.9[48505]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:18 compute-0 sudo[48503]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:19 compute-0 sudo[48659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvbbqlygowcvdtqwjawwjfwbgcvpduoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394958.8606648-226-144039998829468/AnsiballZ_dnf.py'
Dec 10 19:29:19 compute-0 sudo[48659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:19 compute-0 python3.9[48661]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:22 compute-0 sudo[48659]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:22 compute-0 sudo[48829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzfqafrdcclwvxqvkjzjoptnjfjczdvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394962.3117468-235-62985513923422/AnsiballZ_dnf.py'
Dec 10 19:29:22 compute-0 sudo[48829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:22 compute-0 python3.9[48831]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:24 compute-0 sudo[48829]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:24 compute-0 sudo[48982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndnxyqzeedwhoyhyqjxtgpkeyqvemrju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394964.3083615-244-88550487838533/AnsiballZ_dnf.py'
Dec 10 19:29:24 compute-0 sudo[48982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:24 compute-0 python3.9[48984]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:36 compute-0 sudo[48982]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:37 compute-0 sudo[49318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgeckrbhctixquzurxyucdpzoyztrlyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394976.9224524-253-143150858347308/AnsiballZ_dnf.py'
Dec 10 19:29:37 compute-0 sudo[49318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:37 compute-0 python3.9[49320]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:29:38 compute-0 sudo[49318]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:39 compute-0 sudo[49474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bihlmhboawzvrfbbvrkybggmetvvgoav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394979.2091544-264-113849300191347/AnsiballZ_file.py'
Dec 10 19:29:39 compute-0 sudo[49474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:39 compute-0 python3.9[49476]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:29:39 compute-0 sudo[49474]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:40 compute-0 sudo[49649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bswkbbbojgrpmfumigjuznvdvhrcqfln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394979.8381262-272-7919604680338/AnsiballZ_stat.py'
Dec 10 19:29:40 compute-0 sudo[49649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:40 compute-0 python3.9[49651]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:29:40 compute-0 sudo[49649]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:40 compute-0 sudo[49772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnrdwipruuwwqaygqunworptmjecvkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394979.8381262-272-7919604680338/AnsiballZ_copy.py'
Dec 10 19:29:40 compute-0 sudo[49772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:40 compute-0 python3.9[49774]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765394979.8381262-272-7919604680338/.source.json _original_basename=.kop8yewr follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:29:40 compute-0 sudo[49772]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:41 compute-0 sudo[49924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evcmnnusjqbixshbnhjztdpepozyryhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394981.211549-290-277719951379765/AnsiballZ_podman_image.py'
Dec 10 19:29:41 compute-0 sudo[49924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:41 compute-0 python3.9[49926]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3365587385-lower\x2dmapped.mount: Deactivated successfully.
Dec 10 19:29:48 compute-0 podman[49939]: 2025-12-10 19:29:48.809344107 +0000 UTC m=+6.780187569 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 10 19:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:29:48 compute-0 sudo[49924]: pam_unix(sudo:session): session closed for user root
Dec 10 19:29:49 compute-0 sudo[50234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmekcpxaduclzvkpcdekyazwgwddfgck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765394989.2525024-301-274451275854668/AnsiballZ_podman_image.py'
Dec 10 19:29:49 compute-0 sudo[50234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:29:49 compute-0 python3.9[50236]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:30:01 compute-0 podman[50248]: 2025-12-10 19:30:01.739376394 +0000 UTC m=+12.001393569 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 19:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:01 compute-0 sudo[50234]: pam_unix(sudo:session): session closed for user root
Dec 10 19:30:02 compute-0 sudo[50543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foqxgileoxrpwfcybfucwnozhhtuwhxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395002.1713798-311-272430317082819/AnsiballZ_podman_image.py'
Dec 10 19:30:02 compute-0 sudo[50543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:30:02 compute-0 python3.9[50545]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:30:03 compute-0 podman[50557]: 2025-12-10 19:30:03.882221452 +0000 UTC m=+1.232323098 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 10 19:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:04 compute-0 sudo[50543]: pam_unix(sudo:session): session closed for user root
Dec 10 19:30:04 compute-0 sudo[50787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npckhkrphajmpcjcnucbhzithkbrbyho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395004.3324494-320-236667552496358/AnsiballZ_podman_image.py'
Dec 10 19:30:04 compute-0 sudo[50787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:30:04 compute-0 python3.9[50789]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:30:16 compute-0 podman[50802]: 2025-12-10 19:30:16.830728859 +0000 UTC m=+11.948937871 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 10 19:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:17 compute-0 sudo[50787]: pam_unix(sudo:session): session closed for user root
Dec 10 19:30:17 compute-0 sudo[51096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtakgnxrhwssjygaiwqntiunyaxkvnpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395017.4173987-331-182281598759654/AnsiballZ_podman_image.py'
Dec 10 19:30:17 compute-0 sudo[51096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:30:17 compute-0 python3.9[51098]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:37 compute-0 podman[51110]: 2025-12-10 19:30:37.348139693 +0000 UTC m=+19.467705943 image pull 56c883f8f40c5930eb627315cd44b817f13b3afba240562a68f6f941d942bd50 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 10 19:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:37 compute-0 sudo[51096]: pam_unix(sudo:session): session closed for user root
Dec 10 19:30:37 compute-0 sudo[51427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isxcihafmnfqkejbnekdjimedxbknwfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395037.6839728-331-144152213420118/AnsiballZ_podman_image.py'
Dec 10 19:30:37 compute-0 sudo[51427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:30:38 compute-0 python3.9[51429]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:30:39 compute-0 podman[51443]: 2025-12-10 19:30:39.666728421 +0000 UTC m=+1.437461785 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 10 19:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:39 compute-0 sudo[51427]: pam_unix(sudo:session): session closed for user root
Dec 10 19:30:40 compute-0 sudo[51709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spmbzaucigibxwjjschygjjaphodrtor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395040.165621-347-2843764880295/AnsiballZ_podman_image.py'
Dec 10 19:30:40 compute-0 sudo[51709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:30:40 compute-0 python3.9[51711]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:43 compute-0 podman[51723]: 2025-12-10 19:30:43.700699022 +0000 UTC m=+2.948943186 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 10 19:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:43 compute-0 sudo[51709]: pam_unix(sudo:session): session closed for user root
Dec 10 19:30:44 compute-0 sudo[51978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsxtqcdvmkzinxqhveqjzifcrsklnmfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395044.1679416-347-15631246477626/AnsiballZ_podman_image.py'
Dec 10 19:30:44 compute-0 sudo[51978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:30:44 compute-0 python3.9[51980]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 10 19:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:51 compute-0 podman[51994]: 2025-12-10 19:30:51.718204722 +0000 UTC m=+6.864458657 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 10 19:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:30:52 compute-0 sudo[51978]: pam_unix(sudo:session): session closed for user root
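[editorial sketch] The containers.podman.podman_image tasks in this session pre-pull the service images one by one using the auth file written earlier. The sketch below shows an equivalent loop over the podman CLI with --authfile; the image list is copied from the log and may differ between deployments, and this is only an illustration of the effect, not the module's implementation.

#!/usr/bin/env python3
"""Sketch of the image pre-pull loop performed above."""
import subprocess

AUTH_FILE = "/root/.config/containers/auth.json"
IMAGES = [
    "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
    "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
    "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
    "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
    "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested",
    "quay.io/prometheus/node-exporter:v1.5.0",
    "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified",
    "quay.io/sustainable_computing_io/kepler:release-0.7.12",
]

def pull_all() -> None:
    for image in IMAGES:
        # --authfile points podman at the registry credentials written earlier.
        subprocess.run(["podman", "pull", "--authfile", AUTH_FILE, image], check=True)

if __name__ == "__main__":
    pull_all()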
Dec 10 19:30:52 compute-0 sshd-session[45085]: Connection closed by 192.168.122.30 port 34074
Dec 10 19:30:52 compute-0 sshd-session[45082]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:30:52 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 10 19:30:52 compute-0 systemd[1]: session-11.scope: Consumed 2min 38.215s CPU time.
Dec 10 19:30:52 compute-0 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Dec 10 19:30:52 compute-0 systemd-logind[789]: Removed session 11.
Dec 10 19:30:58 compute-0 sshd-session[52242]: Accepted publickey for zuul from 192.168.122.30 port 33134 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:30:58 compute-0 systemd-logind[789]: New session 12 of user zuul.
Dec 10 19:30:58 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 10 19:30:58 compute-0 sshd-session[52242]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:30:59 compute-0 python3.9[52395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:31:00 compute-0 sudo[52549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrwbfbtyygqzpxtbnnlfrdnpqlofzppa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395059.6899836-36-49707445026015/AnsiballZ_getent.py'
Dec 10 19:31:00 compute-0 sudo[52549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:00 compute-0 python3.9[52551]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 10 19:31:00 compute-0 sudo[52549]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:01 compute-0 sudo[52702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjddiibmpbbqhlefxsaklxmusemdoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395060.5776029-44-56757675892083/AnsiballZ_group.py'
Dec 10 19:31:01 compute-0 sudo[52702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:01 compute-0 python3.9[52704]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 10 19:31:01 compute-0 groupadd[52705]: group added to /etc/group: name=openvswitch, GID=42476
Dec 10 19:31:01 compute-0 groupadd[52705]: group added to /etc/gshadow: name=openvswitch
Dec 10 19:31:01 compute-0 groupadd[52705]: new group: name=openvswitch, GID=42476
Dec 10 19:31:01 compute-0 sudo[52702]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:02 compute-0 sudo[52860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhsrhkkqdsksjgrxdbrotvyxblnrusjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395061.5473256-52-195463152654185/AnsiballZ_user.py'
Dec 10 19:31:02 compute-0 sudo[52860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:02 compute-0 python3.9[52862]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 10 19:31:02 compute-0 useradd[52864]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 10 19:31:02 compute-0 useradd[52864]: add 'openvswitch' to group 'hugetlbfs'
Dec 10 19:31:02 compute-0 useradd[52864]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 10 19:31:02 compute-0 sudo[52860]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:03 compute-0 sudo[53020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmnuidjbuqtnpvhpluyeqcyhdbxacwvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395062.778968-62-235560572743453/AnsiballZ_setup.py'
Dec 10 19:31:03 compute-0 sudo[53020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:03 compute-0 python3.9[53022]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:31:03 compute-0 sudo[53020]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:04 compute-0 sudo[53104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphzgysvbdrsdtdaeteffooiejuvociy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395062.778968-62-235560572743453/AnsiballZ_dnf.py'
Dec 10 19:31:04 compute-0 sudo[53104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:04 compute-0 python3.9[53106]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:31:06 compute-0 sudo[53104]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:06 compute-0 sudo[53266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caupnnujutdewgsysotypihureepprfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395066.2583714-76-168665822966575/AnsiballZ_dnf.py'
Dec 10 19:31:06 compute-0 sudo[53266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:06 compute-0 python3.9[53268]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:31:21 compute-0 kernel: SELinux:  Converting 2733 SID table entries...
Dec 10 19:31:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:31:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 10 19:31:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:31:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:31:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:31:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:31:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:31:21 compute-0 groupadd[53292]: group added to /etc/group: name=unbound, GID=993
Dec 10 19:31:21 compute-0 groupadd[53292]: group added to /etc/gshadow: name=unbound
Dec 10 19:31:21 compute-0 groupadd[53292]: new group: name=unbound, GID=993
Dec 10 19:31:21 compute-0 useradd[53299]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 10 19:31:21 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 10 19:31:21 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 10 19:31:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:31:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:31:23 compute-0 systemd[1]: Reloading.
Dec 10 19:31:23 compute-0 systemd-sysv-generator[53798]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:31:23 compute-0 systemd-rc-local-generator[53795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:31:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:31:23 compute-0 sudo[53266]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:31:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:31:23 compute-0 systemd[1]: run-rb46288e10e784545b44dc90756c51719.service: Deactivated successfully.
Dec 10 19:31:24 compute-0 sudo[54364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvnhazhenfezrmhxtatxdnjrxrofnkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395084.057583-84-137555080513469/AnsiballZ_systemd.py'
Dec 10 19:31:24 compute-0 sudo[54364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:24 compute-0 python3.9[54366]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:31:25 compute-0 systemd[1]: Reloading.
Dec 10 19:31:25 compute-0 systemd-sysv-generator[54402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:31:25 compute-0 systemd-rc-local-generator[54398]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:31:25 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 10 19:31:25 compute-0 chown[54408]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 10 19:31:25 compute-0 ovs-ctl[54413]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 10 19:31:25 compute-0 ovs-ctl[54413]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 10 19:31:25 compute-0 ovs-ctl[54413]: Starting ovsdb-server [  OK  ]
Dec 10 19:31:25 compute-0 ovs-vsctl[54462]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 10 19:31:25 compute-0 ovs-vsctl[54482]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 10 19:31:25 compute-0 ovs-ctl[54413]: Configuring Open vSwitch system IDs [  OK  ]
Dec 10 19:31:25 compute-0 ovs-ctl[54413]: Enabling remote OVSDB managers [  OK  ]
Dec 10 19:31:25 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 10 19:31:25 compute-0 ovs-vsctl[54488]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 10 19:31:25 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 10 19:31:25 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 10 19:31:25 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 10 19:31:25 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 10 19:31:25 compute-0 ovs-ctl[54533]: Inserting openvswitch module [  OK  ]
Dec 10 19:31:25 compute-0 ovs-ctl[54502]: Starting ovs-vswitchd [  OK  ]
Dec 10 19:31:26 compute-0 ovs-vsctl[54550]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 10 19:31:26 compute-0 ovs-ctl[54502]: Enabling remote OVSDB managers [  OK  ]
Dec 10 19:31:26 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 10 19:31:26 compute-0 systemd[1]: Starting Open vSwitch...
Dec 10 19:31:26 compute-0 systemd[1]: Finished Open vSwitch.
Dec 10 19:31:26 compute-0 sudo[54364]: pam_unix(sudo:session): session closed for user root
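[editorial sketch] The systemd task above (enabled=True, state=started) brings up openvswitch.service, which in turn starts ovsdb-server and ovs-vswitchd as logged. A minimal equivalent with systemctl is sketched below; the ovs-vsctl call is added only as a sanity check and is an assumption, not something the Ansible module runs.

#!/usr/bin/env python3
"""Sketch of enabling and starting Open vSwitch as done above."""
import subprocess

def enable_and_start(unit: str = "openvswitch.service") -> None:
    # "enable --now" enables the unit and starts it in one call.
    subprocess.run(["systemctl", "enable", "--now", unit], check=True)
    # Quick sanity check that ovsdb-server/ovs-vswitchd are answering.
    subprocess.run(["ovs-vsctl", "show"], check=True)

if __name__ == "__main__":
    enable_and_start()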
Dec 10 19:31:26 compute-0 python3.9[54702]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:31:27 compute-0 sudo[54852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjegcadfkthxnucdjdrvvjugvuhkkybt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395087.137914-102-252174354118125/AnsiballZ_sefcontext.py'
Dec 10 19:31:27 compute-0 sudo[54852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:27 compute-0 python3.9[54854]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 10 19:31:28 compute-0 kernel: SELinux:  Converting 2747 SID table entries...
Dec 10 19:31:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:31:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 10 19:31:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:31:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:31:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:31:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:31:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:31:29 compute-0 sudo[54852]: pam_unix(sudo:session): session closed for user root
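The sefcontext task at 19:31:27 is what triggers the SELinux policy reload seen in the kernel messages above: it adds a persistent file-context rule mapping /var/lib/edpm-config (created a few tasks later) to container_file_t. A roughly equivalent manual sequence, assuming the stock policycoreutils tools, would be:

    # add a persistent file-context rule for the edpm-config tree
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    # relabel anything already present under that path
    restorecon -Rv /var/lib/edpm-config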
Dec 10 19:31:29 compute-0 python3.9[55009]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:31:30 compute-0 sudo[55165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exwftbikzlgplykhnbfcqqoqcawxztep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395090.3358772-120-52801230379366/AnsiballZ_dnf.py'
Dec 10 19:31:30 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 10 19:31:30 compute-0 sudo[55165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:30 compute-0 python3.9[55167]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:31:32 compute-0 sudo[55165]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:32 compute-0 sudo[55318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqmmjmjhcwucppnixklkzaotiwocnojj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395092.2601871-128-63156716273437/AnsiballZ_command.py'
Dec 10 19:31:32 compute-0 sudo[55318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:32 compute-0 python3.9[55320]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:31:33 compute-0 sudo[55318]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:34 compute-0 sudo[55605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dakjlnexgjrmqzeqoovwcuuduqrqjtlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395093.761951-136-173086553973217/AnsiballZ_file.py'
Dec 10 19:31:34 compute-0 sudo[55605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:34 compute-0 python3.9[55607]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 10 19:31:34 compute-0 sudo[55605]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:35 compute-0 python3.9[55757]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:31:35 compute-0 sudo[55909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pentzecrdeswcifkdyjizkqxlnxfguur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395095.3445861-152-101532948021285/AnsiballZ_dnf.py'
Dec 10 19:31:35 compute-0 sudo[55909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:35 compute-0 python3.9[55911]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:31:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:31:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:31:38 compute-0 systemd[1]: Reloading.
Dec 10 19:31:38 compute-0 systemd-rc-local-generator[55949]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:31:38 compute-0 systemd-sysv-generator[55952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:31:38 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:31:38 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:31:38 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:31:38 compute-0 systemd[1]: run-r5fb731245d0b49cfa612c38f48588d4e.service: Deactivated successfully.
Dec 10 19:31:38 compute-0 sudo[55909]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:39 compute-0 sudo[56227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdmtzmhjdenksnuzpgpamxnadgaphpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395098.7255263-160-104375323091507/AnsiballZ_systemd.py'
Dec 10 19:31:39 compute-0 sudo[56227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:39 compute-0 python3.9[56229]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:31:39 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 10 19:31:39 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 10 19:31:39 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 10 19:31:39 compute-0 systemd[1]: Stopping Network Manager...
Dec 10 19:31:39 compute-0 NetworkManager[7185]: <info>  [1765395099.3835] caught SIGTERM, shutting down normally.
Dec 10 19:31:39 compute-0 NetworkManager[7185]: <info>  [1765395099.3862] dhcp4 (eth0): canceled DHCP transaction
Dec 10 19:31:39 compute-0 NetworkManager[7185]: <info>  [1765395099.3863] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 10 19:31:39 compute-0 NetworkManager[7185]: <info>  [1765395099.3863] dhcp4 (eth0): state changed no lease
Dec 10 19:31:39 compute-0 NetworkManager[7185]: <info>  [1765395099.3868] manager: NetworkManager state is now CONNECTED_SITE
Dec 10 19:31:39 compute-0 NetworkManager[7185]: <info>  [1765395099.3947] exiting (success)
Dec 10 19:31:39 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 10 19:31:39 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 10 19:31:39 compute-0 systemd[1]: Stopped Network Manager.
Dec 10 19:31:39 compute-0 systemd[1]: NetworkManager.service: Consumed 14.577s CPU time, 4.1M memory peak, read 0B from disk, written 25.5K to disk.
Dec 10 19:31:39 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 10 19:31:39 compute-0 systemd[1]: Starting Network Manager...
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.4520] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:94b3788e-ef1c-48b5-bcf4-2732c1663990)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.4523] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.4589] manager[0x562a4935c000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 10 19:31:39 compute-0 systemd[1]: Starting Hostname Service...
Dec 10 19:31:39 compute-0 systemd[1]: Started Hostname Service.
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5477] hostname: hostname: using hostnamed
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5478] hostname: static hostname changed from (none) to "compute-0"
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5482] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5486] manager[0x562a4935c000]: rfkill: Wi-Fi hardware radio set enabled
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5487] manager[0x562a4935c000]: rfkill: WWAN hardware radio set enabled
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5506] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5514] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5515] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5515] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5516] manager: Networking is enabled by state file
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5519] settings: Loaded settings plugin: keyfile (internal)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5522] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5544] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5551] dhcp: init: Using DHCP client 'internal'
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5553] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5557] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5561] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5567] device (lo): Activation: starting connection 'lo' (f2373871-aaf0-4c91-b3c1-62ecfbed22d7)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5572] device (eth0): carrier: link connected
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5576] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5578] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5579] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5583] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5588] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5592] device (eth1): carrier: link connected
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5595] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5599] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c9d23b9c-193f-548f-8fcf-eba6bd4e3cbf) (indicated)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5599] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5602] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5606] device (eth1): Activation: starting connection 'ci-private-network' (c9d23b9c-193f-548f-8fcf-eba6bd4e3cbf)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5610] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 10 19:31:39 compute-0 systemd[1]: Started Network Manager.
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5617] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5618] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5619] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5621] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5623] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5624] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5626] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5629] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5635] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5638] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5645] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5657] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5668] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5670] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5675] device (lo): Activation: successful, device activated.
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5682] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5691] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 10 19:31:39 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5770] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5779] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5786] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5790] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5793] device (eth1): Activation: successful, device activated.
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5816] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5818] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5822] manager: NetworkManager state is now CONNECTED_SITE
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5824] device (eth0): Activation: successful, device activated.
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5830] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 10 19:31:39 compute-0 NetworkManager[56238]: <info>  [1765395099.5833] manager: startup complete
Dec 10 19:31:39 compute-0 sudo[56227]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:39 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 10 19:31:40 compute-0 sudo[56453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjgkoltumvgromlkhprknrgjqvpkupn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395099.7681932-168-280699858308845/AnsiballZ_dnf.py'
Dec 10 19:31:40 compute-0 sudo[56453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:40 compute-0 python3.9[56455]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:31:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:31:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:31:45 compute-0 systemd[1]: Reloading.
Dec 10 19:31:45 compute-0 systemd-sysv-generator[56512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:31:45 compute-0 systemd-rc-local-generator[56505]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:31:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:31:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:31:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:31:46 compute-0 systemd[1]: run-rf86d28f9a2b8449b83f2b3f87ebca31d.service: Deactivated successfully.
Dec 10 19:31:46 compute-0 sudo[56453]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:46 compute-0 sudo[56911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpxidlajeewbvvgihjtagxvtsxshtwop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395106.6329372-180-181393755617466/AnsiballZ_stat.py'
Dec 10 19:31:46 compute-0 sudo[56911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:47 compute-0 python3.9[56913]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:31:47 compute-0 sudo[56911]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:47 compute-0 sudo[57063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgdbyzyxppclarjrknhlqrseertoszvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395107.403279-189-220526994079341/AnsiballZ_ini_file.py'
Dec 10 19:31:47 compute-0 sudo[57063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:47 compute-0 python3.9[57065]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:48 compute-0 sudo[57063]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:48 compute-0 sudo[57217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzkuoektpjiczboykrhqtxhpykgyrgsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395108.2679684-199-213385400454143/AnsiballZ_ini_file.py'
Dec 10 19:31:48 compute-0 sudo[57217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:48 compute-0 python3.9[57219]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:48 compute-0 sudo[57217]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:49 compute-0 sudo[57369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgcvbpiklcibbhvodqwgcqujkddoocig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395108.8581462-199-49281365880831/AnsiballZ_ini_file.py'
Dec 10 19:31:49 compute-0 sudo[57369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:49 compute-0 python3.9[57371]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:49 compute-0 sudo[57369]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:49 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 10 19:31:49 compute-0 sudo[57521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqeqinvvdpwtdqzmptxrnsqsiiiczfpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395109.5506094-214-83289093157078/AnsiballZ_ini_file.py'
Dec 10 19:31:49 compute-0 sudo[57521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:49 compute-0 python3.9[57523]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:49 compute-0 sudo[57521]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:50 compute-0 sudo[57673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hojzjmlkzmcrkbbxnhbnonlcppoontpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395110.121318-214-130841163540400/AnsiballZ_ini_file.py'
Dec 10 19:31:50 compute-0 sudo[57673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:50 compute-0 python3.9[57675]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:50 compute-0 sudo[57673]: pam_unix(sudo:session): session closed for user root
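Taken together, the five ini_file tasks between 19:31:47 and 19:31:50 leave no-auto-default=* in the [main] section of /etc/NetworkManager/NetworkManager.conf and remove any dns= or rc-manager= overrides there and in /etc/NetworkManager/conf.d/99-cloud-init.conf, so NetworkManager stops auto-creating profiles for new interfaces and falls back to its default resolv.conf handling. Since crudini is among the packages installed earlier in this run, the same edits could be made by hand with, for example:

    # keep NetworkManager from auto-generating default profiles for new NICs
    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    # drop dns= / rc-manager= overrides so the built-in defaults apply
    crudini --del /etc/NetworkManager/NetworkManager.conf main dns
    crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager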
Dec 10 19:31:51 compute-0 sudo[57825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodnzfvxspxthpnrfbhkcbrgqeqgusyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395110.7479103-229-276914754584761/AnsiballZ_stat.py'
Dec 10 19:31:51 compute-0 sudo[57825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:51 compute-0 python3.9[57827]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:31:51 compute-0 sudo[57825]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:51 compute-0 sudo[57948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkwjmgivmbhvmcvwdiwbwhgfldtrthvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395110.7479103-229-276914754584761/AnsiballZ_copy.py'
Dec 10 19:31:51 compute-0 sudo[57948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:51 compute-0 python3.9[57950]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395110.7479103-229-276914754584761/.source _original_basename=.ueytxygk follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:51 compute-0 sudo[57948]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:52 compute-0 sudo[58100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdeaynzhowbxaorxjlmdgtwajufyktnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395112.0237603-244-22187289500668/AnsiballZ_file.py'
Dec 10 19:31:52 compute-0 sudo[58100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:52 compute-0 python3.9[58102]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:52 compute-0 sudo[58100]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:53 compute-0 sudo[58252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgzvhbisjrqcyaqztgnlxizhbegvwvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395112.6646345-252-216742925656001/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 10 19:31:53 compute-0 sudo[58252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:53 compute-0 python3.9[58254]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 10 19:31:53 compute-0 sudo[58252]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:53 compute-0 sudo[58404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxvblxuoxxnfjwbtxoupfkvpgllqvlbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395113.5900352-261-43270966691274/AnsiballZ_file.py'
Dec 10 19:31:53 compute-0 sudo[58404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:54 compute-0 python3.9[58406]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:31:54 compute-0 sudo[58404]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:54 compute-0 sudo[58556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbajhciolzxcbuwcvrlyccxgunqxqeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395114.4257457-271-239871618263154/AnsiballZ_stat.py'
Dec 10 19:31:54 compute-0 sudo[58556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:54 compute-0 sudo[58556]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:55 compute-0 sudo[58679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efmywgocnafycyxkeqywufxfwwvxsroj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395114.4257457-271-239871618263154/AnsiballZ_copy.py'
Dec 10 19:31:55 compute-0 sudo[58679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:55 compute-0 sudo[58679]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:56 compute-0 sudo[58831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpyvwqooxhbqfmlbemjwwiuwxgfivdzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395115.7186222-286-139708401336504/AnsiballZ_slurp.py'
Dec 10 19:31:56 compute-0 sudo[58831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:56 compute-0 python3.9[58833]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 10 19:31:56 compute-0 sudo[58831]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:57 compute-0 sudo[59006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjtpbvvogzpapnqymxhvxjlauhvuste ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395116.519177-295-280113470852276/async_wrapper.py j523232812701 300 /home/zuul/.ansible/tmp/ansible-tmp-1765395116.519177-295-280113470852276/AnsiballZ_edpm_os_net_config.py _'
Dec 10 19:31:57 compute-0 sudo[59006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:31:57 compute-0 ansible-async_wrapper.py[59008]: Invoked with j523232812701 300 /home/zuul/.ansible/tmp/ansible-tmp-1765395116.519177-295-280113470852276/AnsiballZ_edpm_os_net_config.py _
Dec 10 19:31:57 compute-0 ansible-async_wrapper.py[59011]: Starting module and watcher
Dec 10 19:31:57 compute-0 ansible-async_wrapper.py[59011]: Start watching 59012 (300)
Dec 10 19:31:57 compute-0 ansible-async_wrapper.py[59012]: Start module (59012)
Dec 10 19:31:57 compute-0 ansible-async_wrapper.py[59008]: Return async_wrapper task started.
Dec 10 19:31:57 compute-0 sudo[59006]: pam_unix(sudo:session): session closed for user root
Dec 10 19:31:57 compute-0 python3.9[59013]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
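The async-wrapped edpm_os_net_config module hands /etc/os-net-config/config.yaml (slurped but not logged above) to os-net-config with cleanup, debug and detailed exit codes enabled and the nmstate backend selected; a standalone run would look something like os-net-config -c /etc/os-net-config/config.yaml --detailed-exit-codes --cleanup --debug. The connection profiles it creates below (br-ex with eth1, vlan20, vlan21 and vlan22 attached) are consistent with a config along these lines, where the addresses are placeholders rather than values taken from this log:

    # hypothetical sketch of /etc/os-net-config/config.yaml for this node
    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
            primary: true
          - type: vlan
            vlan_id: 20
            addresses:
              - ip_netmask: 192.0.2.10/24   # placeholder address
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22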
Dec 10 19:31:58 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 10 19:31:58 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 10 19:31:58 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 10 19:31:58 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 10 19:31:58 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3092] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3116] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59014 uid=0 result="success"
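Before touching anything, the tool (presumably through nmstate) asks NetworkManager for a checkpoint so every change can be rolled back automatically if the run dies mid-way; the two audit entries above are the checkpoint being created and its rollback window being extended. The same calls can be made by hand over D-Bus, for example (the 300-second timeouts are illustrative, not taken from this log):

    # checkpoint over all devices, auto-rollback after 300 s unless destroyed
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 300 0
    # extend the rollback window of checkpoint 1 by another 300 s
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointAdjustRollbackTimeout ou \
        /org/freedesktop/NetworkManager/Checkpoint/1 300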
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3740] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3742] audit: op="connection-add" uuid="de2c9f40-3bf2-4432-aee1-6ee57ddba58d" name="br-ex-br" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3759] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3761] audit: op="connection-add" uuid="9ecc051f-4b3d-434e-8698-c7d6c0f78c06" name="br-ex-port" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3771] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3772] audit: op="connection-add" uuid="0fce8da0-1b08-4c17-a2be-c4371d90a6d6" name="eth1-port" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3782] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3784] audit: op="connection-add" uuid="463b319c-7c85-4976-b170-3244f1f8d072" name="vlan20-port" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3793] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3795] audit: op="connection-add" uuid="9c74722f-ca10-467e-ac49-abe079662185" name="vlan21-port" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3805] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3806] audit: op="connection-add" uuid="e4a4a04f-7a77-4c61-aef6-1d000c7bbe4b" name="vlan22-port" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3824] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3838] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3839] audit: op="connection-add" uuid="e0904bdf-f936-41ad-be70-4c5e48b92567" name="br-ex-if" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3919] audit: op="connection-update" uuid="c9d23b9c-193f-548f-8fcf-eba6bd4e3cbf" name="ci-private-network" args="ovs-interface.type,ipv6.routing-rules,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.method,ovs-external-ids.data,connection.master,connection.controller,connection.slave-type,connection.port-type,connection.timestamp,ipv4.routing-rules,ipv4.never-default,ipv4.routes,ipv4.dns,ipv4.method,ipv4.addresses" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3937] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3938] audit: op="connection-add" uuid="9db20ada-f824-4b36-b70a-4b01f5937706" name="vlan20-if" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3952] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3953] audit: op="connection-add" uuid="a3d4b3cd-7336-4a2d-96bf-e39909979673" name="vlan21-if" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3967] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3968] audit: op="connection-add" uuid="b3e10535-278d-475d-8ae7-2dd25108486c" name="vlan22-if" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3978] audit: op="connection-delete" uuid="a1c87774-911c-3957-85ba-28b52f665aa6" name="Wired connection 1" pid=59014 uid=0 result="success"
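NetworkManager models an OVS topology as stacked profiles: one ovs-bridge, one ovs-port per attached device, and one ovs-interface (or a plain ethernet profile such as 'ci-private-network' for eth1) enslaved to each port, which is why a single bridge with four members produces the long run of connection-add entries above before the old cloud-init profile 'Wired connection 1' is dropped. A minimal hand-built equivalent for just the bridge and its internal interface, using nmcli's documented OVS syntax (profile names are illustrative):

    # bridge, port and internal interface stacked the way the OVS plugin expects
    nmcli connection add type ovs-bridge conn.interface br-ex con-name br-ex-br
    nmcli connection add type ovs-port conn.interface br-ex master br-ex-br con-name br-ex-port
    nmcli connection add type ovs-interface conn.interface br-ex master br-ex-port \
        con-name br-ex-if ipv4.method disabled ipv6.method disabled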
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3988] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.3990] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3994] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3997] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (de2c9f40-3bf2-4432-aee1-6ee57ddba58d)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3998] audit: op="connection-activate" uuid="de2c9f40-3bf2-4432-aee1-6ee57ddba58d" name="br-ex-br" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.3999] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.3999] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4002] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4005] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9ecc051f-4b3d-434e-8698-c7d6c0f78c06)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4007] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4007] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4010] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4013] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0fce8da0-1b08-4c17-a2be-c4371d90a6d6)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4014] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4015] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4018] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4021] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (463b319c-7c85-4976-b170-3244f1f8d072)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4022] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4023] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4026] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4028] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9c74722f-ca10-467e-ac49-abe079662185)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4030] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4030] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4033] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4036] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (e4a4a04f-7a77-4c61-aef6-1d000c7bbe4b)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4037] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4038] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4039] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4044] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4044] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4046] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4049] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e0904bdf-f936-41ad-be70-4c5e48b92567)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4049] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4051] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4052] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4053] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4053] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4060] device (eth1): disconnecting for new activation request.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4060] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4062] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4063] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4064] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4065] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4066] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4068] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4070] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (9db20ada-f824-4b36-b70a-4b01f5937706)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4071] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4072] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4073] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4074] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4076] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4076] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4078] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4080] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (a3d4b3cd-7336-4a2d-96bf-e39909979673)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4081] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4082] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4083] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4084] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4086] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <warn>  [1765395119.4086] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4088] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4091] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b3e10535-278d-475d-8ae7-2dd25108486c)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4091] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4093] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4094] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4094] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4095] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4110] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority,802-3-ethernet.mtu" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4111] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4114] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4116] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4128] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4131] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4135] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4138] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4139] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4143] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4146] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4149] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4151] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4157] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4161] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4165] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4167] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4173] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 kernel: Timeout policy base is empty
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4180] dhcp4 (eth0): canceled DHCP transaction
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4180] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4180] dhcp4 (eth0): state changed no lease
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4182] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4193] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4196] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59014 uid=0 result="fail" reason="Device is not activated"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4202] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 10 19:31:59 compute-0 systemd-udevd[59018]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4235] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4240] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 10 19:31:59 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4296] device (eth1): disconnecting for new activation request.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4298] audit: op="connection-activate" uuid="c9d23b9c-193f-548f-8fcf-eba6bd4e3cbf" name="ci-private-network" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4300] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4376] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59014 uid=0 result="success"
Dec 10 19:31:59 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4432] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4448] device (eth1): Activation: starting connection 'ci-private-network' (c9d23b9c-193f-548f-8fcf-eba6bd4e3cbf)
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4454] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4458] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4463] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4472] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4475] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4482] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4484] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4487] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4491] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4496] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4499] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4503] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4505] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4509] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4512] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4516] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4519] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4522] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4524] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 kernel: br-ex: entered promiscuous mode
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4526] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4529] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4533] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4536] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 kernel: vlan22: entered promiscuous mode
Dec 10 19:31:59 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 10 19:31:59 compute-0 systemd-udevd[59020]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4649] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4662] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4669] device (eth1): Activation: successful, device activated.
Dec 10 19:31:59 compute-0 kernel: vlan21: entered promiscuous mode
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4677] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4693] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4730] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4736] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4740] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 kernel: vlan20: entered promiscuous mode
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4759] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4776] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4821] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4822] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4824] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4828] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4849] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4858] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4873] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4882] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4883] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4886] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4894] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4895] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 10 19:31:59 compute-0 NetworkManager[56238]: <info>  [1765395119.4898] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 10 19:32:00 compute-0 NetworkManager[56238]: <info>  [1765395120.6126] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59014 uid=0 result="success"
Dec 10 19:32:00 compute-0 NetworkManager[56238]: <info>  [1765395120.7492] checkpoint[0x562a49331950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 10 19:32:00 compute-0 NetworkManager[56238]: <info>  [1765395120.7498] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59014 uid=0 result="success"
Dec 10 19:32:01 compute-0 sudo[59346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylbijnmfxihfswaicyfvkogxcfbvvesu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395120.5374072-295-107409149226590/AnsiballZ_async_status.py'
Dec 10 19:32:01 compute-0 sudo[59346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.0648] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59014 uid=0 result="success"
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.0661] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59014 uid=0 result="success"
Dec 10 19:32:01 compute-0 python3.9[59348]: ansible-ansible.legacy.async_status Invoked with jid=j523232812701.59008 mode=status _async_dir=/root/.ansible_async
Dec 10 19:32:01 compute-0 sudo[59346]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.3089] audit: op="networking-control" arg="global-dns-configuration" pid=59014 uid=0 result="success"
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.3126] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.3160] audit: op="networking-control" arg="global-dns-configuration" pid=59014 uid=0 result="success"
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.3587] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59014 uid=0 result="success"
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.4954] checkpoint[0x562a49331a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 10 19:32:01 compute-0 NetworkManager[56238]: <info>  [1765395121.4958] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59014 uid=0 result="success"
Dec 10 19:32:01 compute-0 ansible-async_wrapper.py[59012]: Module complete (59012)
Dec 10 19:32:02 compute-0 ansible-async_wrapper.py[59011]: Done in kid B.
Dec 10 19:32:04 compute-0 sudo[59450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikamvuvtwqrklajgpendzcipjepsbndh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395120.5374072-295-107409149226590/AnsiballZ_async_status.py'
Dec 10 19:32:04 compute-0 sudo[59450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:04 compute-0 python3.9[59452]: ansible-ansible.legacy.async_status Invoked with jid=j523232812701.59008 mode=status _async_dir=/root/.ansible_async
Dec 10 19:32:04 compute-0 sudo[59450]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:05 compute-0 sudo[59550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bulmvdoocuqgvicwwigtptukeffqujjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395120.5374072-295-107409149226590/AnsiballZ_async_status.py'
Dec 10 19:32:05 compute-0 sudo[59550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:05 compute-0 python3.9[59552]: ansible-ansible.legacy.async_status Invoked with jid=j523232812701.59008 mode=cleanup _async_dir=/root/.ansible_async
Dec 10 19:32:05 compute-0 sudo[59550]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:05 compute-0 sudo[59702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcglprvayconcstjqdezjpogiaxltaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395125.5545394-322-268479867878608/AnsiballZ_stat.py'
Dec 10 19:32:05 compute-0 sudo[59702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:06 compute-0 python3.9[59704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:32:06 compute-0 sudo[59702]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:06 compute-0 sudo[59825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqikgyzgbbegqtyuwpihqhhomalhevrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395125.5545394-322-268479867878608/AnsiballZ_copy.py'
Dec 10 19:32:06 compute-0 sudo[59825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:06 compute-0 python3.9[59827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395125.5545394-322-268479867878608/.source.returncode _original_basename=.crpwv8js follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:32:06 compute-0 sudo[59825]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:07 compute-0 sudo[59977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvhsrwfyfkktiieyzxqxkwtvpbgktsll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395126.8697505-338-109239440665858/AnsiballZ_stat.py'
Dec 10 19:32:07 compute-0 sudo[59977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:07 compute-0 python3.9[59979]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:32:07 compute-0 sudo[59977]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:07 compute-0 sudo[60100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjkheaxmnwjprhpdikfwrhhjnvywslps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395126.8697505-338-109239440665858/AnsiballZ_copy.py'
Dec 10 19:32:07 compute-0 sudo[60100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:07 compute-0 python3.9[60102]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395126.8697505-338-109239440665858/.source.cfg _original_basename=.17fj14gk follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:32:07 compute-0 sudo[60100]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:08 compute-0 sudo[60253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dshwcrmgcdanlvhdoivqgcrluzeooijr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395128.1143055-353-109953149119913/AnsiballZ_systemd.py'
Dec 10 19:32:08 compute-0 sudo[60253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:08 compute-0 python3.9[60255]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:32:08 compute-0 systemd[1]: Reloading Network Manager...
Dec 10 19:32:08 compute-0 NetworkManager[56238]: <info>  [1765395128.8287] audit: op="reload" arg="0" pid=60259 uid=0 result="success"
Dec 10 19:32:08 compute-0 NetworkManager[56238]: <info>  [1765395128.8298] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 10 19:32:08 compute-0 systemd[1]: Reloaded Network Manager.
Dec 10 19:32:08 compute-0 sudo[60253]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:09 compute-0 sshd-session[52245]: Connection closed by 192.168.122.30 port 33134
Dec 10 19:32:09 compute-0 sshd-session[52242]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:32:09 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 10 19:32:09 compute-0 systemd[1]: session-12.scope: Consumed 52.720s CPU time.
Dec 10 19:32:09 compute-0 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Dec 10 19:32:09 compute-0 systemd-logind[789]: Removed session 12.
Dec 10 19:32:09 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 10 19:32:14 compute-0 sshd-session[60292]: Accepted publickey for zuul from 192.168.122.30 port 46754 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:32:14 compute-0 systemd-logind[789]: New session 13 of user zuul.
Dec 10 19:32:15 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 10 19:32:15 compute-0 sshd-session[60292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:32:16 compute-0 python3.9[60445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:32:17 compute-0 python3.9[60599]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:32:18 compute-0 python3.9[60789]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:32:18 compute-0 sshd-session[60295]: Connection closed by 192.168.122.30 port 46754
Dec 10 19:32:18 compute-0 sshd-session[60292]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:32:18 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 10 19:32:18 compute-0 systemd[1]: session-13.scope: Consumed 2.545s CPU time.
Dec 10 19:32:18 compute-0 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Dec 10 19:32:18 compute-0 systemd-logind[789]: Removed session 13.
Dec 10 19:32:18 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 10 19:32:24 compute-0 sshd-session[60819]: Accepted publickey for zuul from 192.168.122.30 port 48696 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:32:24 compute-0 systemd-logind[789]: New session 14 of user zuul.
Dec 10 19:32:24 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 10 19:32:24 compute-0 sshd-session[60819]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:32:25 compute-0 python3.9[60972]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:32:26 compute-0 python3.9[61126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:32:27 compute-0 sudo[61281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydkgblluvyrocitrecrdqyixmwwyluda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395147.2789636-40-280429442342760/AnsiballZ_setup.py'
Dec 10 19:32:27 compute-0 sudo[61281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:27 compute-0 python3.9[61283]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:32:28 compute-0 sudo[61281]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:28 compute-0 sudo[61365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpoaczouxqiyseeixowfghyjhhzbwdcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395147.2789636-40-280429442342760/AnsiballZ_dnf.py'
Dec 10 19:32:28 compute-0 sudo[61365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:28 compute-0 python3.9[61367]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:32:30 compute-0 sudo[61365]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:30 compute-0 sudo[61519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctuyxbyvupttrxjmcjunsextuqxistqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395150.323963-52-260257679483165/AnsiballZ_setup.py'
Dec 10 19:32:30 compute-0 sudo[61519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:30 compute-0 python3.9[61521]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:32:31 compute-0 sudo[61519]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:31 compute-0 sudo[61710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amcvpowyditkixoilblxvskgyczihsfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395151.4488893-63-161960426791682/AnsiballZ_file.py'
Dec 10 19:32:31 compute-0 sudo[61710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:32 compute-0 python3.9[61712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:32:32 compute-0 sudo[61710]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:32 compute-0 sudo[61862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bixhvsplqvwbryoddmkeyzvvoddwjwcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395152.2753465-71-226322070222159/AnsiballZ_command.py'
Dec 10 19:32:32 compute-0 sudo[61862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:32 compute-0 python3.9[61864]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:32:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:32:33 compute-0 sudo[61862]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:33 compute-0 sudo[62025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwuzaciofwbvtagxkgfncjdfciswsrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395153.2022765-79-100577125069346/AnsiballZ_stat.py'
Dec 10 19:32:33 compute-0 sudo[62025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:33 compute-0 python3.9[62027]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:32:33 compute-0 sudo[62025]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:34 compute-0 sudo[62103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abfcutbfhtqcqrudtospugmzgojheufa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395153.2022765-79-100577125069346/AnsiballZ_file.py'
Dec 10 19:32:34 compute-0 sudo[62103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:34 compute-0 python3.9[62105]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:32:34 compute-0 sudo[62103]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:34 compute-0 sudo[62255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orjxduogwnhlazwohmupditupwwplyyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395154.5712745-91-253819255436558/AnsiballZ_stat.py'
Dec 10 19:32:34 compute-0 sudo[62255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:35 compute-0 python3.9[62257]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:32:35 compute-0 sudo[62255]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:35 compute-0 sudo[62333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acoyeqdldpdktvsbcwgtovakmezztxou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395154.5712745-91-253819255436558/AnsiballZ_file.py'
Dec 10 19:32:35 compute-0 sudo[62333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:35 compute-0 python3.9[62335]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:32:35 compute-0 sudo[62333]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:36 compute-0 sudo[62485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzqewlcextwdiuaegmeymcqgjiwbkeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395155.858161-104-129155242090599/AnsiballZ_ini_file.py'
Dec 10 19:32:36 compute-0 sudo[62485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:36 compute-0 python3.9[62487]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:32:36 compute-0 sudo[62485]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:36 compute-0 sudo[62637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgccdgsfgacqtzupzmcpoplgejmalfnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395156.6694083-104-35236681993827/AnsiballZ_ini_file.py'
Dec 10 19:32:36 compute-0 sudo[62637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:37 compute-0 python3.9[62639]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:32:37 compute-0 sudo[62637]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:37 compute-0 sudo[62789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxpkyxotqmnrrbpyicvqpziknnlayrnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395157.348915-104-46529781410990/AnsiballZ_ini_file.py'
Dec 10 19:32:37 compute-0 sudo[62789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:37 compute-0 python3.9[62791]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:32:37 compute-0 sudo[62789]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:38 compute-0 sudo[62941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmixivcjwpgqzzyhrhlojlabpxezdhjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395157.992267-104-37461883004877/AnsiballZ_ini_file.py'
Dec 10 19:32:38 compute-0 sudo[62941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:38 compute-0 python3.9[62943]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:32:38 compute-0 sudo[62941]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:38 compute-0 sudo[63093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxaxtgbddjzplikrqaukjusjnmdwkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395158.687207-135-248519675442159/AnsiballZ_dnf.py'
Dec 10 19:32:38 compute-0 sudo[63093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:39 compute-0 python3.9[63095]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:32:40 compute-0 sudo[63093]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:41 compute-0 sudo[63246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zycnxdnysiygqzzpojtuqkhtpcizebxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395160.8016057-146-118974681288672/AnsiballZ_setup.py'
Dec 10 19:32:41 compute-0 sudo[63246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:41 compute-0 python3.9[63248]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:32:41 compute-0 sudo[63246]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:41 compute-0 sudo[63400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gduehwplxveftawwmgmyjoadzbzfprtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395161.6621463-154-186074032943733/AnsiballZ_stat.py'
Dec 10 19:32:41 compute-0 sudo[63400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:42 compute-0 python3.9[63402]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:32:42 compute-0 sudo[63400]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:42 compute-0 sudo[63552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euaimimrxexdjsdaoufompbgbmckhdnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395162.3470383-163-62544302486622/AnsiballZ_stat.py'
Dec 10 19:32:42 compute-0 sudo[63552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:42 compute-0 python3.9[63554]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:32:42 compute-0 sudo[63552]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:43 compute-0 sudo[63704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmmdquxffzftsklqgyvfhbytvcrgfaxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395163.04651-173-106679797650052/AnsiballZ_command.py'
Dec 10 19:32:43 compute-0 sudo[63704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:43 compute-0 python3.9[63706]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:32:43 compute-0 sudo[63704]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:44 compute-0 sudo[63857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugxnharqitbbpiyfsysfgzocynrsrzha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395163.7039187-183-26023933117042/AnsiballZ_service_facts.py'
Dec 10 19:32:44 compute-0 sudo[63857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:44 compute-0 python3.9[63859]: ansible-service_facts Invoked
Dec 10 19:32:44 compute-0 network[63876]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:32:44 compute-0 network[63877]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:32:44 compute-0 network[63878]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:32:48 compute-0 sudo[63857]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:49 compute-0 sudo[64161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njnjezvszrulioogmubayceokcpzutvg ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765395168.8408988-198-180431446546095/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765395168.8408988-198-180431446546095/args'
Dec 10 19:32:49 compute-0 sudo[64161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:49 compute-0 sudo[64161]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:49 compute-0 sudo[64328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdijaohrbapiqafntszkmbbhyedohlgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395169.689275-209-96187858661832/AnsiballZ_dnf.py'
Dec 10 19:32:49 compute-0 sudo[64328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:50 compute-0 python3.9[64330]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:32:51 compute-0 sudo[64328]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:52 compute-0 sudo[64481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fahcpkvlggykbjtnwaspvtbstqozngpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395171.7992215-222-124024185610084/AnsiballZ_package_facts.py'
Dec 10 19:32:52 compute-0 sudo[64481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:52 compute-0 python3.9[64483]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 10 19:32:52 compute-0 sudo[64481]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:53 compute-0 sudo[64633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsxudvpynoturbilcatrkmqtxmvvtnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395173.3611405-232-240466697210467/AnsiballZ_stat.py'
Dec 10 19:32:53 compute-0 sudo[64633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:53 compute-0 python3.9[64635]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:32:54 compute-0 sudo[64633]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:54 compute-0 sudo[64758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkdsrcnzoqryqiwiendokbgtsfjhopwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395173.3611405-232-240466697210467/AnsiballZ_copy.py'
Dec 10 19:32:54 compute-0 sudo[64758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:54 compute-0 python3.9[64760]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395173.3611405-232-240466697210467/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:32:54 compute-0 sudo[64758]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:55 compute-0 sudo[64912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tprdldryjtuhihabndkriafehzlhcunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395175.0797293-247-159529894014635/AnsiballZ_stat.py'
Dec 10 19:32:55 compute-0 sudo[64912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:55 compute-0 python3.9[64914]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:32:55 compute-0 sudo[64912]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:56 compute-0 sudo[65037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwklmmrntefmzwbfttvileosuvgnfbux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395175.0797293-247-159529894014635/AnsiballZ_copy.py'
Dec 10 19:32:56 compute-0 sudo[65037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:56 compute-0 python3.9[65039]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395175.0797293-247-159529894014635/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:32:56 compute-0 sudo[65037]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:57 compute-0 sudo[65191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzrdxthrhxpsfuzapjxrbmqyffmqepug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395176.7751331-268-178493804620410/AnsiballZ_lineinfile.py'
Dec 10 19:32:57 compute-0 sudo[65191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:57 compute-0 python3.9[65193]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:32:57 compute-0 sudo[65191]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:58 compute-0 sudo[65345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puvucieyopniinyandbzskblutgzhzsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395178.009525-283-23770080442207/AnsiballZ_setup.py'
Dec 10 19:32:58 compute-0 sudo[65345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:58 compute-0 python3.9[65347]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:32:58 compute-0 sudo[65345]: pam_unix(sudo:session): session closed for user root
Dec 10 19:32:59 compute-0 sudo[65429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klpwxfaskyekpbdptrqopjepourtrate ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395178.009525-283-23770080442207/AnsiballZ_systemd.py'
Dec 10 19:32:59 compute-0 sudo[65429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:32:59 compute-0 python3.9[65431]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:32:59 compute-0 sudo[65429]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:00 compute-0 sudo[65583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxnzukipzweuwviiqhlqkaxjxakuimlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395180.2354128-299-86916759989470/AnsiballZ_setup.py'
Dec 10 19:33:00 compute-0 sudo[65583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:00 compute-0 python3.9[65585]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:33:01 compute-0 sudo[65583]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:01 compute-0 sudo[65667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbgorxuzotlptakwpkcygzfuumjrsear ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395180.2354128-299-86916759989470/AnsiballZ_systemd.py'
Dec 10 19:33:01 compute-0 sudo[65667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:01 compute-0 python3.9[65669]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:33:01 compute-0 chronyd[792]: chronyd exiting
Dec 10 19:33:01 compute-0 systemd[1]: Stopping NTP client/server...
Dec 10 19:33:01 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 10 19:33:01 compute-0 systemd[1]: Stopped NTP client/server.
Dec 10 19:33:01 compute-0 systemd[1]: Starting NTP client/server...
Dec 10 19:33:01 compute-0 chronyd[65677]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 10 19:33:01 compute-0 chronyd[65677]: Frequency -26.275 +/- 0.352 ppm read from /var/lib/chrony/drift
Dec 10 19:33:01 compute-0 chronyd[65677]: Loaded seccomp filter (level 2)
Dec 10 19:33:01 compute-0 systemd[1]: Started NTP client/server.
Dec 10 19:33:01 compute-0 sudo[65667]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:02 compute-0 sshd-session[60822]: Connection closed by 192.168.122.30 port 48696
Dec 10 19:33:02 compute-0 sshd-session[60819]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:33:02 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 10 19:33:02 compute-0 systemd[1]: session-14.scope: Consumed 27.336s CPU time.
Dec 10 19:33:02 compute-0 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Dec 10 19:33:02 compute-0 systemd-logind[789]: Removed session 14.
Dec 10 19:33:07 compute-0 sshd-session[65703]: Accepted publickey for zuul from 192.168.122.30 port 34246 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:33:07 compute-0 systemd-logind[789]: New session 15 of user zuul.
Dec 10 19:33:07 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 10 19:33:07 compute-0 sshd-session[65703]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:33:08 compute-0 python3.9[65856]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:33:09 compute-0 sudo[66010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyhahahmgvimwvbdjmlsjuhhjpivqmse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395189.1089036-33-144303657547468/AnsiballZ_file.py'
Dec 10 19:33:09 compute-0 sudo[66010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:09 compute-0 python3.9[66012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:09 compute-0 sudo[66010]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:10 compute-0 sudo[66185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoijajjupzbskjseuwlwgwcsovrjumfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395189.9913301-41-273292760012927/AnsiballZ_stat.py'
Dec 10 19:33:10 compute-0 sudo[66185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:10 compute-0 python3.9[66187]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:10 compute-0 sudo[66185]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:11 compute-0 sudo[66263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqpkandbahypdzjcnyimapbuynaizrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395189.9913301-41-273292760012927/AnsiballZ_file.py'
Dec 10 19:33:11 compute-0 sudo[66263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:11 compute-0 python3.9[66265]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ait__jo5 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:11 compute-0 sudo[66263]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:11 compute-0 sudo[66415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvoznlvmyzdqfbotuqchvdnxfxnuvohy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395191.6469133-61-218303605665930/AnsiballZ_stat.py'
Dec 10 19:33:11 compute-0 sudo[66415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:12 compute-0 python3.9[66417]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:12 compute-0 sudo[66415]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:12 compute-0 sudo[66538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hptmhlizyflyeqvhijtxokeqgvmpyhpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395191.6469133-61-218303605665930/AnsiballZ_copy.py'
Dec 10 19:33:12 compute-0 sudo[66538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:12 compute-0 python3.9[66540]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395191.6469133-61-218303605665930/.source _original_basename=.rlpgmsiz follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:12 compute-0 sudo[66538]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:13 compute-0 sudo[66690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwzxtgwnozkdvqcddkmfeqvrjgbiglfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395193.1495042-77-157550704086439/AnsiballZ_file.py'
Dec 10 19:33:13 compute-0 sudo[66690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:13 compute-0 python3.9[66692]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:33:13 compute-0 sudo[66690]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:14 compute-0 sudo[66842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujvetllpkwovikbrmjlqdygfugchbpny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395193.8159711-85-92677454892873/AnsiballZ_stat.py'
Dec 10 19:33:14 compute-0 sudo[66842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:14 compute-0 python3.9[66844]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:14 compute-0 sudo[66842]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:14 compute-0 sudo[66965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhnvfcfpdssokfonmjibxnhfpcuppcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395193.8159711-85-92677454892873/AnsiballZ_copy.py'
Dec 10 19:33:14 compute-0 sudo[66965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:14 compute-0 python3.9[66967]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395193.8159711-85-92677454892873/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:33:15 compute-0 sudo[66965]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:15 compute-0 sudo[67117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbhgoegxzqlktsxcmujpxaoujtqjtwhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395195.1400666-85-151280397283766/AnsiballZ_stat.py'
Dec 10 19:33:15 compute-0 sudo[67117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:15 compute-0 python3.9[67119]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:15 compute-0 sudo[67117]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:16 compute-0 sudo[67240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtkagwngnitmtzbkortyiogbkmgxsgah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395195.1400666-85-151280397283766/AnsiballZ_copy.py'
Dec 10 19:33:16 compute-0 sudo[67240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:16 compute-0 python3.9[67242]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395195.1400666-85-151280397283766/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:33:16 compute-0 sudo[67240]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:16 compute-0 sudo[67392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdyqqtoozfvkhmovxyziugzbnndpqanq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395196.4745421-114-237531679064928/AnsiballZ_file.py'
Dec 10 19:33:16 compute-0 sudo[67392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:17 compute-0 python3.9[67394]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:17 compute-0 sudo[67392]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:17 compute-0 sudo[67544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihinxdhexsvduzcnbkbnztfxchunvsvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395197.2300255-122-203588542648242/AnsiballZ_stat.py'
Dec 10 19:33:17 compute-0 sudo[67544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:17 compute-0 python3.9[67546]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:17 compute-0 sudo[67544]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:18 compute-0 sudo[67667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lljnouywnpawctjmomiwguujetnrngla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395197.2300255-122-203588542648242/AnsiballZ_copy.py'
Dec 10 19:33:18 compute-0 sudo[67667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:18 compute-0 python3.9[67669]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395197.2300255-122-203588542648242/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:18 compute-0 sudo[67667]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:18 compute-0 sudo[67819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzgbacvkisoiijgimeaoikzqvlvcxmen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395198.5197148-137-63096639450335/AnsiballZ_stat.py'
Dec 10 19:33:18 compute-0 sudo[67819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:19 compute-0 python3.9[67821]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:19 compute-0 sudo[67819]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:19 compute-0 sudo[67942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwifxadqxhqxuyuoyldodoxgftbynljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395198.5197148-137-63096639450335/AnsiballZ_copy.py'
Dec 10 19:33:19 compute-0 sudo[67942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:19 compute-0 python3.9[67944]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395198.5197148-137-63096639450335/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:19 compute-0 sudo[67942]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:20 compute-0 sudo[68095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpvqfbycseqgfhacryiajmsctcluzan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395199.876219-152-160838818179571/AnsiballZ_systemd.py'
Dec 10 19:33:20 compute-0 sudo[68095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:20 compute-0 python3.9[68097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:33:20 compute-0 systemd[1]: Reloading.
Dec 10 19:33:20 compute-0 systemd-rc-local-generator[68121]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:33:20 compute-0 systemd-sysv-generator[68126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:33:21 compute-0 systemd[1]: Reloading.
Dec 10 19:33:21 compute-0 systemd-rc-local-generator[68161]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:33:21 compute-0 systemd-sysv-generator[68164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:33:21 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 10 19:33:21 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 10 19:33:21 compute-0 sudo[68095]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:21 compute-0 sudo[68322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rarezvjnxrgoqnswtbegztqulfebuxjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395201.5641484-160-85046415930773/AnsiballZ_stat.py'
Dec 10 19:33:21 compute-0 sudo[68322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:21 compute-0 python3.9[68324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:22 compute-0 sudo[68322]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:22 compute-0 sudo[68445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfhkfhfanumloahafsvzalrsrowujybe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395201.5641484-160-85046415930773/AnsiballZ_copy.py'
Dec 10 19:33:22 compute-0 sudo[68445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:22 compute-0 python3.9[68447]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395201.5641484-160-85046415930773/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:22 compute-0 sudo[68445]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:23 compute-0 sudo[68597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zquvwwddudhedfopxmhkvffwnrdyljah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395202.7395527-175-17947973600927/AnsiballZ_stat.py'
Dec 10 19:33:23 compute-0 sudo[68597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:23 compute-0 python3.9[68599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:23 compute-0 sudo[68597]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:23 compute-0 sudo[68720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewdnxkrqnycylwqwqmsfcezjkhifwzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395202.7395527-175-17947973600927/AnsiballZ_copy.py'
Dec 10 19:33:23 compute-0 sudo[68720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:23 compute-0 python3.9[68722]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395202.7395527-175-17947973600927/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:23 compute-0 sudo[68720]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:24 compute-0 sudo[68872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxpcmstqohyxageyiqqwxzjejwhaqtzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395203.9437814-190-2201901513026/AnsiballZ_systemd.py'
Dec 10 19:33:24 compute-0 sudo[68872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:24 compute-0 python3.9[68874]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:33:24 compute-0 systemd[1]: Reloading.
Dec 10 19:33:24 compute-0 systemd-sysv-generator[68906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:33:24 compute-0 systemd-rc-local-generator[68902]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:33:24 compute-0 systemd[1]: Reloading.
Dec 10 19:33:24 compute-0 systemd-rc-local-generator[68940]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:33:24 compute-0 systemd-sysv-generator[68944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:33:24 compute-0 systemd[1]: Starting Create netns directory...
Dec 10 19:33:25 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 10 19:33:25 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 10 19:33:25 compute-0 systemd[1]: Finished Create netns directory.
Dec 10 19:33:25 compute-0 sudo[68872]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:25 compute-0 python3.9[69101]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:33:25 compute-0 network[69118]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:33:25 compute-0 network[69119]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:33:25 compute-0 network[69120]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:33:29 compute-0 sudo[69380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biuyvneudwndckhomdzvqxsvdaxkqgxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395209.648834-206-92128216504305/AnsiballZ_systemd.py'
Dec 10 19:33:29 compute-0 sudo[69380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:30 compute-0 python3.9[69382]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:33:30 compute-0 systemd[1]: Reloading.
Dec 10 19:33:30 compute-0 systemd-sysv-generator[69415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:33:30 compute-0 systemd-rc-local-generator[69412]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:33:30 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 10 19:33:30 compute-0 iptables.init[69422]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 10 19:33:30 compute-0 iptables.init[69422]: iptables: Flushing firewall rules: [  OK  ]
Dec 10 19:33:30 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 10 19:33:30 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 10 19:33:30 compute-0 sudo[69380]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:31 compute-0 sudo[69617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tchxpnddcvpamrsduuzgdvjnybidgzrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395210.997158-206-219890527868686/AnsiballZ_systemd.py'
Dec 10 19:33:31 compute-0 sudo[69617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:31 compute-0 python3.9[69619]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:33:31 compute-0 sudo[69617]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:32 compute-0 sudo[69771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzezavesombxjcqjbitxuurkezsggowl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395211.896214-222-235775063861519/AnsiballZ_systemd.py'
Dec 10 19:33:32 compute-0 sudo[69771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:32 compute-0 python3.9[69773]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:33:32 compute-0 systemd[1]: Reloading.
Dec 10 19:33:32 compute-0 systemd-rc-local-generator[69800]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:33:32 compute-0 systemd-sysv-generator[69803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:33:32 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 10 19:33:32 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 10 19:33:32 compute-0 sudo[69771]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:33 compute-0 sudo[69963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bondytquzkzxgeozwgpivwqiayjltwwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395213.1015465-230-150572626192046/AnsiballZ_command.py'
Dec 10 19:33:33 compute-0 sudo[69963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:33 compute-0 python3.9[69965]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:33:33 compute-0 sudo[69963]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:34 compute-0 sudo[70116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiyowhtqamiwblxlgkwpaxvndsthpcvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395214.137569-244-77633932398354/AnsiballZ_stat.py'
Dec 10 19:33:34 compute-0 sudo[70116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:34 compute-0 python3.9[70118]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:34 compute-0 sudo[70116]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:34 compute-0 sudo[70241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqbirtomrvchjgjttupmomvmvpqqsyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395214.137569-244-77633932398354/AnsiballZ_copy.py'
Dec 10 19:33:34 compute-0 sudo[70241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:35 compute-0 python3.9[70243]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395214.137569-244-77633932398354/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:35 compute-0 sudo[70241]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:35 compute-0 sudo[70394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pshhbvwbbnfugmsnlhduivfjjbarldpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395215.3276193-259-156406788581883/AnsiballZ_systemd.py'
Dec 10 19:33:35 compute-0 sudo[70394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:35 compute-0 python3.9[70396]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:33:36 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 10 19:33:36 compute-0 sshd[1004]: Received SIGHUP; restarting.
Dec 10 19:33:36 compute-0 sshd[1004]: Server listening on 0.0.0.0 port 22.
Dec 10 19:33:36 compute-0 sshd[1004]: Server listening on :: port 22.
Dec 10 19:33:36 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 10 19:33:36 compute-0 sudo[70394]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:36 compute-0 sudo[70550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzyarbifhwsuvzlghmweftnmmwgxjdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395216.2644737-267-216911593489920/AnsiballZ_file.py'
Dec 10 19:33:36 compute-0 sudo[70550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:36 compute-0 python3.9[70552]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:36 compute-0 sudo[70550]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:37 compute-0 sudo[70702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwgbzmfbqonisfbwttswccgrlihhbbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395216.9699104-275-209229020648869/AnsiballZ_stat.py'
Dec 10 19:33:37 compute-0 sudo[70702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:37 compute-0 python3.9[70704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:37 compute-0 sudo[70702]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:37 compute-0 sudo[70825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grfbsthkvzlfbozhgillbnacbynfvire ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395216.9699104-275-209229020648869/AnsiballZ_copy.py'
Dec 10 19:33:37 compute-0 sudo[70825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:38 compute-0 python3.9[70827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395216.9699104-275-209229020648869/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:38 compute-0 sudo[70825]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:38 compute-0 sudo[70977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uotoqkvdimmzxknsztntvavcacafpvtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395218.3271587-293-185919259517228/AnsiballZ_timezone.py'
Dec 10 19:33:38 compute-0 sudo[70977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:38 compute-0 python3.9[70979]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 10 19:33:39 compute-0 systemd[1]: Starting Time & Date Service...
Dec 10 19:33:39 compute-0 systemd[1]: Started Time & Date Service.
Dec 10 19:33:39 compute-0 sudo[70977]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:39 compute-0 sudo[71133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whxhsirvuwcwculecjwknxepjrrcbdoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395219.421417-302-167279455924071/AnsiballZ_file.py'
Dec 10 19:33:39 compute-0 sudo[71133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:39 compute-0 python3.9[71135]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:39 compute-0 sudo[71133]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:40 compute-0 sudo[71285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocqzhflldnotauhieutvgnnichppqqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395220.0447176-310-59275658029458/AnsiballZ_stat.py'
Dec 10 19:33:40 compute-0 sudo[71285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:40 compute-0 python3.9[71287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:40 compute-0 sudo[71285]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:40 compute-0 sudo[71408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-supoehoymfkhxbysmevqfpmmitibisvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395220.0447176-310-59275658029458/AnsiballZ_copy.py'
Dec 10 19:33:40 compute-0 sudo[71408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:41 compute-0 python3.9[71410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395220.0447176-310-59275658029458/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:41 compute-0 sudo[71408]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:41 compute-0 sudo[71560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqyixyosceormnpfpnhyvxqfzvrhrgyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395221.2631786-325-209017103453712/AnsiballZ_stat.py'
Dec 10 19:33:41 compute-0 sudo[71560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:41 compute-0 python3.9[71562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:41 compute-0 sudo[71560]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:42 compute-0 sudo[71683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlrimkbpztmvbdwdjgbbujqtiafhrtzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395221.2631786-325-209017103453712/AnsiballZ_copy.py'
Dec 10 19:33:42 compute-0 sudo[71683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:42 compute-0 python3.9[71685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395221.2631786-325-209017103453712/.source.yaml _original_basename=.i6pi0f95 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:42 compute-0 sudo[71683]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:42 compute-0 sudo[71835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biynvdkjnkutpbjqmrctbtxtopvhbooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395222.4655662-340-20625453459734/AnsiballZ_stat.py'
Dec 10 19:33:42 compute-0 sudo[71835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:42 compute-0 python3.9[71837]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:42 compute-0 sudo[71835]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:43 compute-0 sudo[71958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zftacxgvoyhkdssefozthwohkjzsfxef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395222.4655662-340-20625453459734/AnsiballZ_copy.py'
Dec 10 19:33:43 compute-0 sudo[71958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:43 compute-0 python3.9[71960]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395222.4655662-340-20625453459734/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:43 compute-0 sudo[71958]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:43 compute-0 sudo[72110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyluigijdujdymlnwasenkvhjzfihdxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395223.7215776-355-278990676049233/AnsiballZ_command.py'
Dec 10 19:33:43 compute-0 sudo[72110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:44 compute-0 python3.9[72112]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:33:44 compute-0 sudo[72110]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:44 compute-0 sudo[72263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfcxnbrtkhiddzpcomzfdgiwsflpfcmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395224.4627025-363-230944989032940/AnsiballZ_command.py'
Dec 10 19:33:44 compute-0 sudo[72263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:44 compute-0 python3.9[72265]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:33:44 compute-0 sudo[72263]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:45 compute-0 sudo[72416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxbhqvwtcphwvxwuyearadfwoxpiwqae ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395225.3806756-371-269244825347573/AnsiballZ_edpm_nftables_from_files.py'
Dec 10 19:33:45 compute-0 sudo[72416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:45 compute-0 python3[72418]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 10 19:33:46 compute-0 sudo[72416]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:46 compute-0 sudo[72568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trxjdegyoqwqvxyniejtyaunqglyinuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395226.1579196-379-12404123418905/AnsiballZ_stat.py'
Dec 10 19:33:46 compute-0 sudo[72568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:46 compute-0 python3.9[72570]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:46 compute-0 sudo[72568]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:47 compute-0 sudo[72691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefzqmudohibzdqvjmjybobyshvtrsyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395226.1579196-379-12404123418905/AnsiballZ_copy.py'
Dec 10 19:33:47 compute-0 sudo[72691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:47 compute-0 python3.9[72693]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395226.1579196-379-12404123418905/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:47 compute-0 sudo[72691]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:47 compute-0 sudo[72843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimnemcptaqtgepfgofkvqmjtfyxzgdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395227.5118933-394-105614253024099/AnsiballZ_stat.py'
Dec 10 19:33:47 compute-0 sudo[72843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:48 compute-0 python3.9[72845]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:48 compute-0 sudo[72843]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:48 compute-0 sudo[72966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tamahsrxsifcvipltoqtwtxkparjszki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395227.5118933-394-105614253024099/AnsiballZ_copy.py'
Dec 10 19:33:48 compute-0 sudo[72966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:48 compute-0 python3.9[72968]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395227.5118933-394-105614253024099/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:48 compute-0 sudo[72966]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:49 compute-0 sudo[73118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxkyscdxqpaztmyxjgnycxdanjdqcrbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395228.9547617-409-248620196518055/AnsiballZ_stat.py'
Dec 10 19:33:49 compute-0 sudo[73118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:49 compute-0 python3.9[73120]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:49 compute-0 sudo[73118]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:49 compute-0 sudo[73241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnqhdxwguaujrdvbtxqlmfqoivfqgpss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395228.9547617-409-248620196518055/AnsiballZ_copy.py'
Dec 10 19:33:49 compute-0 sudo[73241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:50 compute-0 python3.9[73243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395228.9547617-409-248620196518055/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:50 compute-0 sudo[73241]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:50 compute-0 sudo[73393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bchichbzbnrpwnpqxxomqacjsxoeuusb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395230.2881129-424-136205313515163/AnsiballZ_stat.py'
Dec 10 19:33:50 compute-0 sudo[73393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:50 compute-0 python3.9[73395]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:50 compute-0 sudo[73393]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:51 compute-0 sudo[73516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnuxnzydkfelibiiowkcsnadyqqeeydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395230.2881129-424-136205313515163/AnsiballZ_copy.py'
Dec 10 19:33:51 compute-0 sudo[73516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:51 compute-0 python3.9[73518]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395230.2881129-424-136205313515163/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:51 compute-0 sudo[73516]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:51 compute-0 sudo[73668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxcbafzxzjzcmvsybtflmmpmoezjtjat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395231.5522883-439-156612556709045/AnsiballZ_stat.py'
Dec 10 19:33:51 compute-0 sudo[73668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:52 compute-0 python3.9[73670]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:33:52 compute-0 sudo[73668]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:52 compute-0 sudo[73791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pecjgruxztkyhukhalppjvmqgkxlrkil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395231.5522883-439-156612556709045/AnsiballZ_copy.py'
Dec 10 19:33:52 compute-0 sudo[73791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:52 compute-0 python3.9[73793]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395231.5522883-439-156612556709045/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:52 compute-0 sudo[73791]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:53 compute-0 sudo[73943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ganlbwovrccnqgtfzblindfvnjkusfmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395232.901551-454-234220912398251/AnsiballZ_file.py'
Dec 10 19:33:53 compute-0 sudo[73943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:53 compute-0 python3.9[73945]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:53 compute-0 sudo[73943]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:53 compute-0 sudo[74095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxzpjvnpqbkkpqjmpnhsliohweduidi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395233.5786498-462-11487076795961/AnsiballZ_command.py'
Dec 10 19:33:53 compute-0 sudo[74095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:54 compute-0 python3.9[74097]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:33:54 compute-0 sudo[74095]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:54 compute-0 sudo[74254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxslddycgabctqpbqonuhffyovzqnust ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395234.2618718-470-145473323491091/AnsiballZ_blockinfile.py'
Dec 10 19:33:54 compute-0 sudo[74254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:54 compute-0 python3.9[74256]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:54 compute-0 sudo[74254]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:55 compute-0 sudo[74407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjejkknduxuugeygizorqwgvcjfzldfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395235.1750388-479-171054488807974/AnsiballZ_file.py'
Dec 10 19:33:55 compute-0 sudo[74407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:55 compute-0 python3.9[74409]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:55 compute-0 sudo[74407]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:56 compute-0 sudo[74559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wofddetrmtuizaypiszogytwkxjkdtoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395235.8151844-479-109611801091378/AnsiballZ_file.py'
Dec 10 19:33:56 compute-0 sudo[74559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:56 compute-0 python3.9[74561]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:33:56 compute-0 sudo[74559]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:56 compute-0 sudo[74711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaiuodwsinrerzccygxeyasqwxzqwhal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395236.4969327-494-125525301970056/AnsiballZ_mount.py'
Dec 10 19:33:56 compute-0 sudo[74711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:57 compute-0 python3.9[74713]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 10 19:33:57 compute-0 sudo[74711]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:57 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:33:57 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:33:57 compute-0 sudo[74865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cstfiwokxvebgymwumbkjbdqxueekttf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395237.3573606-494-224415927182754/AnsiballZ_mount.py'
Dec 10 19:33:57 compute-0 sudo[74865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:33:57 compute-0 python3.9[74867]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 10 19:33:57 compute-0 sudo[74865]: pam_unix(sudo:session): session closed for user root
Dec 10 19:33:58 compute-0 sshd-session[65706]: Connection closed by 192.168.122.30 port 34246
Dec 10 19:33:58 compute-0 sshd-session[65703]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:33:58 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 10 19:33:58 compute-0 systemd[1]: session-15.scope: Consumed 38.236s CPU time.
Dec 10 19:33:58 compute-0 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Dec 10 19:33:58 compute-0 systemd-logind[789]: Removed session 15.
Dec 10 19:34:03 compute-0 sshd-session[74893]: Accepted publickey for zuul from 192.168.122.30 port 38344 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:34:03 compute-0 systemd-logind[789]: New session 16 of user zuul.
Dec 10 19:34:03 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 10 19:34:03 compute-0 sshd-session[74893]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:34:03 compute-0 sudo[75046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpzhldxcrmdkkezpibjxngdotodzvqha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395243.45069-16-172215061436875/AnsiballZ_tempfile.py'
Dec 10 19:34:03 compute-0 sudo[75046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:04 compute-0 python3.9[75048]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 10 19:34:04 compute-0 sudo[75046]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:04 compute-0 sudo[75198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plpthfcfywxqyerfmrycfogyuhqhypwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395244.3530707-28-3851466751625/AnsiballZ_stat.py'
Dec 10 19:34:04 compute-0 sudo[75198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:04 compute-0 python3.9[75200]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:34:04 compute-0 sudo[75198]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:05 compute-0 sudo[75350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drjebevvnchigmnfpyvjaxpxznstfusj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395245.1749613-38-61226302961283/AnsiballZ_setup.py'
Dec 10 19:34:05 compute-0 sudo[75350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:05 compute-0 python3.9[75352]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:34:06 compute-0 sudo[75350]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:06 compute-0 sudo[75502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myapqwzforsbgvproyehttjacjueduhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395246.1999545-47-50401162476286/AnsiballZ_blockinfile.py'
Dec 10 19:34:06 compute-0 sudo[75502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:06 compute-0 python3.9[75504]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9O3L952h8nBNnR20n0Kjuslq6zVOKlfIxHlCG7o5kiS/d2+Bm+abaDJ1UkxjLv8EEDYRZfrwmxFhwgtO4Nva6GUfmdH+wwcP6rlaOVWGiGBLLgqsE1My7CmO3lGYqXPkURVqqShqypfAWap5w78H+0qh1Xz7olvncyVm93UEtvk2ZwC0tKcxHy3oZjHD4aJmcM03k53Aa61ccrOy0dSaraJeAulBwWuh8luX3edVgrldrsIkmRedkBoZrOpZ3bJHuEt7Kz3KOsvD4CNxrX5l7r6aY4AWV6Ii/2TDhTEb5Ik1JtMSZ16Gw+Df94XWXgOKaIawj9DvNnXDmUdE6lYyiGStgG0PevBLDCsh8qHeeG6MYPIrMkn7zu5JbGbTfNiew/2osmntLw6pPeSGrKJGPEm44+zta13x+B+szjJJ/hsI3pAdsfomuGELjiGhNI7pRTcJG4v5fziOGehGRaZ5ZAGWNJoDI9nr6+nWdSfLhikL8l+q3PVMOABwwlOtYndk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICsys/m8B+lVqjOBDhnyx5i7O5qZW8/+8a6Jg/J16S9r
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0+nYv+wi3zDsZB3Z20SWnd0qMdQPUfwXz+En4GJi/3m2zRslE30DLC5v1aGDe/oh7AkR9Kd2NAEV2G9wczVbQ=
                                             create=True mode=0644 path=/tmp/ansible.xw_95sig state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:06 compute-0 sudo[75502]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:07 compute-0 sudo[75654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtdqbaviiumezzkerlrgybnimqiyrhck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395246.987503-55-24365008986960/AnsiballZ_command.py'
Dec 10 19:34:07 compute-0 sudo[75654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:07 compute-0 python3.9[75656]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.xw_95sig' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:34:07 compute-0 sudo[75654]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:08 compute-0 sudo[75808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnuhknkwixpoycajgwjftpmdmvtvqicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395247.8220577-63-179778604188632/AnsiballZ_file.py'
Dec 10 19:34:08 compute-0 sudo[75808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:08 compute-0 python3.9[75810]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.xw_95sig state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:08 compute-0 sudo[75808]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:08 compute-0 sshd-session[74896]: Connection closed by 192.168.122.30 port 38344
Dec 10 19:34:08 compute-0 sshd-session[74893]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:34:08 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 10 19:34:08 compute-0 systemd[1]: session-16.scope: Consumed 3.522s CPU time.
Dec 10 19:34:08 compute-0 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Dec 10 19:34:08 compute-0 systemd-logind[789]: Removed session 16.
Dec 10 19:34:09 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 10 19:34:14 compute-0 sshd-session[75837]: Accepted publickey for zuul from 192.168.122.30 port 45492 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:34:14 compute-0 systemd-logind[789]: New session 17 of user zuul.
Dec 10 19:34:14 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 10 19:34:14 compute-0 sshd-session[75837]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:34:15 compute-0 python3.9[75990]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:34:16 compute-0 sudo[76144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apwdnxvpczbwnvrzrirbjlowqiwwqlhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395255.8645942-32-68991742645607/AnsiballZ_systemd.py'
Dec 10 19:34:16 compute-0 sudo[76144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:16 compute-0 python3.9[76146]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 10 19:34:16 compute-0 sudo[76144]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:17 compute-0 sudo[76298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuojiyumaofiqdmsujeitkrlaaxpaoeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395256.8841927-40-231700406023701/AnsiballZ_systemd.py'
Dec 10 19:34:17 compute-0 sudo[76298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:17 compute-0 python3.9[76300]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:34:17 compute-0 sudo[76298]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:18 compute-0 sudo[76451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qofgeioyfcketkefewvbireiwogoqlqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395257.66193-49-170012828699696/AnsiballZ_command.py'
Dec 10 19:34:18 compute-0 sudo[76451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:18 compute-0 python3.9[76453]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:34:18 compute-0 sudo[76451]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:18 compute-0 sudo[76604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyefcdwohhzpjgjdubaydttmplewzksj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395258.453922-57-189593827110191/AnsiballZ_stat.py'
Dec 10 19:34:18 compute-0 sudo[76604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:19 compute-0 python3.9[76606]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:34:19 compute-0 sudo[76604]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:19 compute-0 sudo[76758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwovutfcwoxsqwgjlyrmecphpooimnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395259.3728583-65-236529664076411/AnsiballZ_command.py'
Dec 10 19:34:19 compute-0 sudo[76758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:19 compute-0 python3.9[76760]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:34:19 compute-0 sudo[76758]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:20 compute-0 sudo[76913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqtzizrfzshbzgdddfxbnvdauysvmqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395259.992107-73-95466508259035/AnsiballZ_file.py'
Dec 10 19:34:20 compute-0 sudo[76913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:20 compute-0 python3.9[76915]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:20 compute-0 sudo[76913]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:20 compute-0 sshd-session[75840]: Connection closed by 192.168.122.30 port 45492
Dec 10 19:34:20 compute-0 sshd-session[75837]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:34:20 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 10 19:34:20 compute-0 systemd[1]: session-17.scope: Consumed 4.346s CPU time.
Dec 10 19:34:20 compute-0 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Dec 10 19:34:20 compute-0 systemd-logind[789]: Removed session 17.
Dec 10 19:34:26 compute-0 sshd-session[76940]: Accepted publickey for zuul from 192.168.122.30 port 37290 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:34:26 compute-0 systemd-logind[789]: New session 18 of user zuul.
Dec 10 19:34:26 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 10 19:34:26 compute-0 sshd-session[76940]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:34:27 compute-0 python3.9[77093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:34:28 compute-0 sudo[77247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqvwvpuodhnokfdfshbwutuvhjmfhbci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395268.0029902-34-211781522580965/AnsiballZ_setup.py'
Dec 10 19:34:28 compute-0 sudo[77247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:28 compute-0 python3.9[77249]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:34:28 compute-0 sudo[77247]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:29 compute-0 sudo[77331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcwivavyyojqjcxqgyvxipphitnwhejd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395268.0029902-34-211781522580965/AnsiballZ_dnf.py'
Dec 10 19:34:29 compute-0 sudo[77331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:29 compute-0 python3.9[77333]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 10 19:34:30 compute-0 sudo[77331]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:31 compute-0 python3.9[77484]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:34:32 compute-0 python3.9[77635]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:34:33 compute-0 python3.9[77785]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:34:34 compute-0 python3.9[77935]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:34:34 compute-0 sshd-session[76943]: Connection closed by 192.168.122.30 port 37290
Dec 10 19:34:34 compute-0 sshd-session[76940]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:34:34 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 10 19:34:34 compute-0 systemd[1]: session-18.scope: Consumed 5.947s CPU time.
Dec 10 19:34:34 compute-0 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Dec 10 19:34:34 compute-0 systemd-logind[789]: Removed session 18.
Dec 10 19:34:39 compute-0 sshd-session[77960]: Accepted publickey for zuul from 192.168.122.30 port 48850 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:34:39 compute-0 systemd-logind[789]: New session 19 of user zuul.
Dec 10 19:34:39 compute-0 systemd[1]: Started Session 19 of User zuul.
Dec 10 19:34:40 compute-0 sshd-session[77960]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:34:41 compute-0 python3.9[78113]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:34:42 compute-0 sudo[78267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joqdwbsadunropuyvhvzotqbipsklght ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395282.1326725-50-17586308247/AnsiballZ_file.py'
Dec 10 19:34:42 compute-0 sudo[78267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:42 compute-0 python3.9[78269]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:42 compute-0 sudo[78267]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:43 compute-0 sudo[78419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlhesdkrhrbncleuyjcdcbwlkouyzbld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395282.9382174-50-230286525654394/AnsiballZ_file.py'
Dec 10 19:34:43 compute-0 sudo[78419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:43 compute-0 python3.9[78421]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:43 compute-0 sudo[78419]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:44 compute-0 sudo[78571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biqcdysfjydgkxnvdydqydoghchnalma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395283.675848-65-180992790538618/AnsiballZ_stat.py'
Dec 10 19:34:44 compute-0 sudo[78571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:44 compute-0 python3.9[78573]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:44 compute-0 sudo[78571]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:44 compute-0 sudo[78694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmrjopuaikudsamfatmybqmctgrbbzop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395283.675848-65-180992790538618/AnsiballZ_copy.py'
Dec 10 19:34:44 compute-0 sudo[78694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:44 compute-0 python3.9[78696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395283.675848-65-180992790538618/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ca4a0f7dde450908a93e5c59328bb50b51fffcbd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:45 compute-0 sudo[78694]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:45 compute-0 sudo[78846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twybrwxkpihncrahcykwzuunvspytobg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395285.1346257-65-127413634389430/AnsiballZ_stat.py'
Dec 10 19:34:45 compute-0 sudo[78846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:45 compute-0 python3.9[78848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:45 compute-0 sudo[78846]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:46 compute-0 sudo[78969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnacgddjlgdcauvusezjdmchtxvkpjln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395285.1346257-65-127413634389430/AnsiballZ_copy.py'
Dec 10 19:34:46 compute-0 sudo[78969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:46 compute-0 python3.9[78971]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395285.1346257-65-127413634389430/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=847e9c442aa5512adb0ebedd2129c546b5ecdf6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:46 compute-0 sudo[78969]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:46 compute-0 sudo[79121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjoqbodbwamwnwfeqiwwbtdxkymntdbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395286.5237877-65-243896127465531/AnsiballZ_stat.py'
Dec 10 19:34:46 compute-0 sudo[79121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:47 compute-0 python3.9[79123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:47 compute-0 sudo[79121]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:47 compute-0 sudo[79244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfqjdogjmtlakecukrskdscyixbdokl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395286.5237877-65-243896127465531/AnsiballZ_copy.py'
Dec 10 19:34:47 compute-0 sudo[79244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:47 compute-0 python3.9[79246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395286.5237877-65-243896127465531/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=05c1e59b49b3876fdf64164f32096016028340f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:47 compute-0 sudo[79244]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:48 compute-0 sudo[79396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkaizhavsvfkxfdtddsepipeartbsygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395287.8512597-109-34007290651388/AnsiballZ_file.py'
Dec 10 19:34:48 compute-0 sudo[79396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:48 compute-0 python3.9[79398]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:48 compute-0 sudo[79396]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:48 compute-0 sudo[79548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwleztcpcnsmgczignqrdqaxdpiwrwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395288.4907868-109-205042565679787/AnsiballZ_file.py'
Dec 10 19:34:48 compute-0 sudo[79548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:48 compute-0 python3.9[79550]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:48 compute-0 sudo[79548]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:49 compute-0 sudo[79700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofgcedfklthnqququncnfwiwuethhvcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395289.138736-124-252477663741241/AnsiballZ_stat.py'
Dec 10 19:34:49 compute-0 sudo[79700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:49 compute-0 python3.9[79702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:49 compute-0 sudo[79700]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:50 compute-0 sudo[79823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vruestnfrryahbpaflutdppkjprnnnkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395289.138736-124-252477663741241/AnsiballZ_copy.py'
Dec 10 19:34:50 compute-0 sudo[79823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:50 compute-0 python3.9[79825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395289.138736-124-252477663741241/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7cd55b0ee251c006e3ba1bccf9b411be13e5d5e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:50 compute-0 sudo[79823]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:50 compute-0 sudo[79975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnebllkewctzazpobdhdzdgqtcjtzgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395290.4651067-124-223813346348244/AnsiballZ_stat.py'
Dec 10 19:34:50 compute-0 sudo[79975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:51 compute-0 python3.9[79977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:51 compute-0 sudo[79975]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:51 compute-0 sudo[80098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezzmnpmoatrgwqaowkycxelcmbqirhgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395290.4651067-124-223813346348244/AnsiballZ_copy.py'
Dec 10 19:34:51 compute-0 sudo[80098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:51 compute-0 python3.9[80100]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395290.4651067-124-223813346348244/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=847e9c442aa5512adb0ebedd2129c546b5ecdf6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:51 compute-0 sudo[80098]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:52 compute-0 sudo[80250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrlfbdwzkkkkhydzidpmyvgjfkhfcjzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395291.7728505-124-237595616830720/AnsiballZ_stat.py'
Dec 10 19:34:52 compute-0 sudo[80250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:52 compute-0 python3.9[80252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:52 compute-0 sudo[80250]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:52 compute-0 sudo[80373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmotspqmaahhosscljpypiehvhcsnbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395291.7728505-124-237595616830720/AnsiballZ_copy.py'
Dec 10 19:34:52 compute-0 sudo[80373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:52 compute-0 python3.9[80375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395291.7728505-124-237595616830720/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7a3bad5fa11b3976e6981ccdc21e2c51dd800e88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:52 compute-0 sudo[80373]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:53 compute-0 sudo[80525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksmrknrdrkqeblomitcammisqhhvdisv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395292.9628298-168-125081309086623/AnsiballZ_file.py'
Dec 10 19:34:53 compute-0 sudo[80525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:53 compute-0 python3.9[80527]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:53 compute-0 sudo[80525]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:53 compute-0 sudo[80677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydgnkyvmsoyykgjeplmvweymfyaailjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395293.5519586-168-155234161757751/AnsiballZ_file.py'
Dec 10 19:34:53 compute-0 sudo[80677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:54 compute-0 python3.9[80679]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:54 compute-0 sudo[80677]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:54 compute-0 sudo[80829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amejsmgnbtutsbyldoraweiuknyoanxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395294.1901295-183-253833375578277/AnsiballZ_stat.py'
Dec 10 19:34:54 compute-0 sudo[80829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:54 compute-0 python3.9[80831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:54 compute-0 sudo[80829]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:54 compute-0 sudo[80952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbekfalceyqahiqkujswqrfpqdycrpio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395294.1901295-183-253833375578277/AnsiballZ_copy.py'
Dec 10 19:34:54 compute-0 sudo[80952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:55 compute-0 python3.9[80954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395294.1901295-183-253833375578277/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=365860a2eb8a16a467c75c0325a562dd87a63eb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:55 compute-0 sudo[80952]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:55 compute-0 sudo[81104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnwsialzjggkgkiasbvkiwkyixlpswsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395295.3429155-183-111192478964194/AnsiballZ_stat.py'
Dec 10 19:34:55 compute-0 sudo[81104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:55 compute-0 python3.9[81106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:55 compute-0 sudo[81104]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:56 compute-0 sudo[81227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iewhpqxpvlsllmfdjlvymdsvybfkvget ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395295.3429155-183-111192478964194/AnsiballZ_copy.py'
Dec 10 19:34:56 compute-0 sudo[81227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:56 compute-0 python3.9[81229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395295.3429155-183-111192478964194/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2ac2558cd2eb318624b5dbba51dbcf594cede1dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:56 compute-0 sudo[81227]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:56 compute-0 sudo[81379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhgobgqeedxloxjkyripqrmpgoglwglh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395296.4323332-183-177513107629637/AnsiballZ_stat.py'
Dec 10 19:34:56 compute-0 sudo[81379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:56 compute-0 python3.9[81381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:56 compute-0 sudo[81379]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:57 compute-0 sudo[81502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tknlivoqgvwxaywjvbnhmlyxmxrkycyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395296.4323332-183-177513107629637/AnsiballZ_copy.py'
Dec 10 19:34:57 compute-0 sudo[81502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:57 compute-0 python3.9[81504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395296.4323332-183-177513107629637/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=84c47b2bfe4a111002a47775094524aba2b02a39 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:57 compute-0 sudo[81502]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:58 compute-0 sudo[81654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xajbfszauvevtkuhkmnyqjzrgavkvnzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395297.703248-227-83915169259849/AnsiballZ_file.py'
Dec 10 19:34:58 compute-0 sudo[81654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:58 compute-0 python3.9[81656]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:58 compute-0 sudo[81654]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:58 compute-0 sudo[81806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkodezbrjbuwvqrylvkcfizwjleygsej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395298.3425822-227-270632283181558/AnsiballZ_file.py'
Dec 10 19:34:58 compute-0 sudo[81806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:58 compute-0 python3.9[81808]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:34:58 compute-0 sudo[81806]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:59 compute-0 sudo[81958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxreugermvfapfsucvylkgqjhwmfvcrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395298.949628-242-190583130360433/AnsiballZ_stat.py'
Dec 10 19:34:59 compute-0 sudo[81958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:59 compute-0 python3.9[81960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:34:59 compute-0 sudo[81958]: pam_unix(sudo:session): session closed for user root
Dec 10 19:34:59 compute-0 sudo[82083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtjabrtseohqiuixmjmtlepqquaaqxqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395298.949628-242-190583130360433/AnsiballZ_copy.py'
Dec 10 19:34:59 compute-0 sudo[82083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:34:59 compute-0 python3.9[82085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395298.949628-242-190583130360433/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=25de7c4e99244e2a4073e0626d0da472651ba922 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:34:59 compute-0 sudo[82083]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:00 compute-0 sudo[82235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzlsfxhknlnjoepgqohguilgkyofwbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395300.0876129-242-24815019745376/AnsiballZ_stat.py'
Dec 10 19:35:00 compute-0 sudo[82235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:00 compute-0 python3.9[82237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:00 compute-0 sudo[82235]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:00 compute-0 sudo[82358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqutbdyxebvnubmersyjyiqswotqqfez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395300.0876129-242-24815019745376/AnsiballZ_copy.py'
Dec 10 19:35:00 compute-0 sudo[82358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:01 compute-0 python3.9[82360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395300.0876129-242-24815019745376/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=623a38b5217e309613b4174c19a545586e571d32 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:01 compute-0 sudo[82358]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:01 compute-0 sshd-session[82008]: Invalid user ubuntu from 101.36.224.146 port 48328
Dec 10 19:35:01 compute-0 sudo[82510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahalmpgncnnozvhuwpbgdevhojakuaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395301.2088938-242-11003761131431/AnsiballZ_stat.py'
Dec 10 19:35:01 compute-0 sudo[82510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:01 compute-0 python3.9[82512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:01 compute-0 sudo[82510]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:01 compute-0 sshd-session[82008]: Received disconnect from 101.36.224.146 port 48328:11:  [preauth]
Dec 10 19:35:01 compute-0 sshd-session[82008]: Disconnected from invalid user ubuntu 101.36.224.146 port 48328 [preauth]
Dec 10 19:35:01 compute-0 sudo[82633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmnsyjvexbfbtodsfywexklfwthfypzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395301.2088938-242-11003761131431/AnsiballZ_copy.py'
Dec 10 19:35:01 compute-0 sudo[82633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:02 compute-0 python3.9[82635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395301.2088938-242-11003761131431/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8f42c189e578f2149fcb61cdc17d66558e255bd1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:02 compute-0 sudo[82633]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:02 compute-0 sudo[82785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhvipytwqqyjgdipwzwmawspwzfjyygi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395302.3401828-286-99079166194464/AnsiballZ_file.py'
Dec 10 19:35:02 compute-0 sudo[82785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:02 compute-0 python3.9[82787]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:02 compute-0 sudo[82785]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:03 compute-0 sudo[82937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecijkcvbwjqltievwcxfzyjcczrybojh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395302.906695-286-209859213449711/AnsiballZ_file.py'
Dec 10 19:35:03 compute-0 sudo[82937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:03 compute-0 python3.9[82939]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:03 compute-0 sudo[82937]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:04 compute-0 sudo[83089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpcnsjlulxlmqtujtcmxwdlxvoqmebsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395303.5681636-301-154199941567722/AnsiballZ_stat.py'
Dec 10 19:35:04 compute-0 sudo[83089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:04 compute-0 python3.9[83091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:04 compute-0 sudo[83089]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:04 compute-0 sudo[83212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yonnthawnfsrlkazrzkjuabhjfdaumgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395303.5681636-301-154199941567722/AnsiballZ_copy.py'
Dec 10 19:35:04 compute-0 sudo[83212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:04 compute-0 python3.9[83214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395303.5681636-301-154199941567722/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c1b5f5751abd04c1b64e336310a5e57ea7b78d89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:04 compute-0 sudo[83212]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:05 compute-0 sudo[83364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fslppdlwtltfdkkhgqjfentsthntolck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395305.0093546-301-201201238953302/AnsiballZ_stat.py'
Dec 10 19:35:05 compute-0 sudo[83364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:05 compute-0 python3.9[83366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:05 compute-0 sudo[83364]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:06 compute-0 sudo[83487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmlxqjjpacybeczcrqtausmyvjbljgtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395305.0093546-301-201201238953302/AnsiballZ_copy.py'
Dec 10 19:35:06 compute-0 sudo[83487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:06 compute-0 python3.9[83489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395305.0093546-301-201201238953302/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2ac2558cd2eb318624b5dbba51dbcf594cede1dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:06 compute-0 sudo[83487]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:06 compute-0 sudo[83639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiiwhxbyubwomeeametscbcxjgsvkbcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395306.6117597-301-190221949801767/AnsiballZ_stat.py'
Dec 10 19:35:06 compute-0 sudo[83639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:07 compute-0 python3.9[83641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:07 compute-0 sudo[83639]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:07 compute-0 sudo[83762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugxfkyyhwykxjrxbpbnpwgopggvyvcol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395306.6117597-301-190221949801767/AnsiballZ_copy.py'
Dec 10 19:35:07 compute-0 sudo[83762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:07 compute-0 python3.9[83764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395306.6117597-301-190221949801767/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3a3617f0e0b7c3da96f52cefdf5a3161e3676658 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:07 compute-0 sudo[83762]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:08 compute-0 sudo[83914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qftuerxlqwyuhjwpdcrlrsljkvnktjal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395308.4163718-361-221389598505705/AnsiballZ_file.py'
Dec 10 19:35:08 compute-0 sudo[83914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:08 compute-0 python3.9[83916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:08 compute-0 sudo[83914]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:09 compute-0 sudo[84066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grxqbpytvvxjvwcuhqswipbaxawowatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395309.3211772-369-183845031231642/AnsiballZ_stat.py'
Dec 10 19:35:09 compute-0 sudo[84066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:09 compute-0 python3.9[84068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:09 compute-0 sudo[84066]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:10 compute-0 sudo[84189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjjawyyhpnorsnfaoxpopkoodzkpuspq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395309.3211772-369-183845031231642/AnsiballZ_copy.py'
Dec 10 19:35:10 compute-0 sudo[84189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:10 compute-0 python3.9[84191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395309.3211772-369-183845031231642/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:10 compute-0 sudo[84189]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:10 compute-0 sudo[84341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnwvgwgwyloeeyythesjrirxpuocxnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395310.6054158-385-214933147243967/AnsiballZ_file.py'
Dec 10 19:35:10 compute-0 sudo[84341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:11 compute-0 python3.9[84343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:11 compute-0 sudo[84341]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:11 compute-0 sudo[84493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbcrcexqcxpzmqnnispfikmgnwfxpsmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395311.2286322-393-17804008781689/AnsiballZ_stat.py'
Dec 10 19:35:11 compute-0 sudo[84493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:11 compute-0 python3.9[84495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:11 compute-0 sudo[84493]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:11 compute-0 chronyd[65677]: Selected source 206.108.0.131 (pool.ntp.org)
Dec 10 19:35:12 compute-0 sudo[84616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jifrxqltpvtqulimtamccktikafnyson ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395311.2286322-393-17804008781689/AnsiballZ_copy.py'
Dec 10 19:35:12 compute-0 sudo[84616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:12 compute-0 python3.9[84618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395311.2286322-393-17804008781689/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:12 compute-0 sudo[84616]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:12 compute-0 sudo[84768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffnkbfiigglpkpypnfuuavtixfsaipvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395312.4237247-409-4931553341734/AnsiballZ_file.py'
Dec 10 19:35:12 compute-0 sudo[84768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:12 compute-0 python3.9[84770]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:12 compute-0 sudo[84768]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:13 compute-0 sudo[84920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzpsgxndkqkihaftckdufhujumctxvsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395313.1147828-417-232715324187427/AnsiballZ_stat.py'
Dec 10 19:35:13 compute-0 sudo[84920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:13 compute-0 python3.9[84922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:13 compute-0 sudo[84920]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:14 compute-0 sudo[85043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpakrfhqwhjagwwukozprjglagvsqhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395313.1147828-417-232715324187427/AnsiballZ_copy.py'
Dec 10 19:35:14 compute-0 sudo[85043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:14 compute-0 python3.9[85045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395313.1147828-417-232715324187427/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:14 compute-0 sudo[85043]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:14 compute-0 sudo[85195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkugbmnmnbqgtdtepwhnpcefdyohkpji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395314.4439976-433-144496289290235/AnsiballZ_file.py'
Dec 10 19:35:14 compute-0 sudo[85195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:14 compute-0 python3.9[85197]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:14 compute-0 sudo[85195]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:15 compute-0 sudo[85347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxananpkxzmbwwetiydvoryhdneorqmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395315.0789707-441-8733208650306/AnsiballZ_stat.py'
Dec 10 19:35:15 compute-0 sudo[85347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:15 compute-0 python3.9[85349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:15 compute-0 sudo[85347]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:16 compute-0 sudo[85470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxoqyijcdegpahksgvpkzhiprtzszuub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395315.0789707-441-8733208650306/AnsiballZ_copy.py'
Dec 10 19:35:16 compute-0 sudo[85470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:16 compute-0 python3.9[85472]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395315.0789707-441-8733208650306/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:16 compute-0 sudo[85470]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:16 compute-0 sudo[85622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwthakztrhfcbrisvuzblzesygtsrpfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395316.4838777-457-193949056947255/AnsiballZ_file.py'
Dec 10 19:35:16 compute-0 sudo[85622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:17 compute-0 python3.9[85624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:17 compute-0 sudo[85622]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:17 compute-0 sudo[85774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdeahbyqtuzojkpcqumcihzvkywxtyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395317.2328718-465-223774933039755/AnsiballZ_stat.py'
Dec 10 19:35:17 compute-0 sudo[85774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:17 compute-0 python3.9[85776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:17 compute-0 sudo[85774]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:18 compute-0 sudo[85897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pexlfrzcljrvhtesbmjpaxfgugixvxuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395317.2328718-465-223774933039755/AnsiballZ_copy.py'
Dec 10 19:35:18 compute-0 sudo[85897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:18 compute-0 python3.9[85899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395317.2328718-465-223774933039755/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:18 compute-0 sudo[85897]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:18 compute-0 sudo[86049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfrjnmmacoehepkkjbqtbfceuabhvevq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395318.5961912-481-89797249001338/AnsiballZ_file.py'
Dec 10 19:35:18 compute-0 sudo[86049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:19 compute-0 python3.9[86051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:19 compute-0 sudo[86049]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:19 compute-0 sudo[86201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axomyebbdufhaiandlmxynbmgujdamqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395319.391324-489-62111843191699/AnsiballZ_stat.py'
Dec 10 19:35:19 compute-0 sudo[86201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:19 compute-0 python3.9[86203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:19 compute-0 sudo[86201]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:20 compute-0 sudo[86324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xatyekchgkcowvcqtxtykxqyqvafahxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395319.391324-489-62111843191699/AnsiballZ_copy.py'
Dec 10 19:35:20 compute-0 sudo[86324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:20 compute-0 python3.9[86326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395319.391324-489-62111843191699/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:20 compute-0 sudo[86324]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:21 compute-0 sudo[86476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obtnklhqcjqxuwqabfwjhgyldeoaotlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395320.8321712-505-158386732762538/AnsiballZ_file.py'
Dec 10 19:35:21 compute-0 sudo[86476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:21 compute-0 python3.9[86478]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:21 compute-0 sudo[86476]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:21 compute-0 sudo[86628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjuuzrjgoruledhbafcxpobcxithazcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395321.4692965-513-237760232489319/AnsiballZ_stat.py'
Dec 10 19:35:21 compute-0 sudo[86628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:21 compute-0 python3.9[86630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:21 compute-0 sudo[86628]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:22 compute-0 sudo[86751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evovzhebbjuifrsnzqiwiuipsvdcefdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395321.4692965-513-237760232489319/AnsiballZ_copy.py'
Dec 10 19:35:22 compute-0 sudo[86751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:22 compute-0 python3.9[86753]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395321.4692965-513-237760232489319/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:22 compute-0 sudo[86751]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:23 compute-0 sudo[86903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwscoabfyqebcuaogzguadtirksuuco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395322.8100877-529-178760229024508/AnsiballZ_file.py'
Dec 10 19:35:23 compute-0 sudo[86903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:23 compute-0 python3.9[86905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:23 compute-0 sudo[86903]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:23 compute-0 sudo[87055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lutgydzqpeyekfxzdlsqbuefmgtgoosz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395323.5350935-537-51452494020695/AnsiballZ_stat.py'
Dec 10 19:35:23 compute-0 sudo[87055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:24 compute-0 python3.9[87057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:24 compute-0 sudo[87055]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:24 compute-0 sudo[87178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfcxpyonsvkqumetpvjrwqsbqzmxcbsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395323.5350935-537-51452494020695/AnsiballZ_copy.py'
Dec 10 19:35:24 compute-0 sudo[87178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:24 compute-0 python3.9[87180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395323.5350935-537-51452494020695/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ed0eab07f33e7bd10540e1c9a3e81b31631a82c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:24 compute-0 sudo[87178]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:25 compute-0 sshd-session[77963]: Connection closed by 192.168.122.30 port 48850
Dec 10 19:35:25 compute-0 sshd-session[77960]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:35:25 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 10 19:35:25 compute-0 systemd[1]: session-19.scope: Consumed 36.529s CPU time.
Dec 10 19:35:25 compute-0 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Dec 10 19:35:25 compute-0 systemd-logind[789]: Removed session 19.
Dec 10 19:35:30 compute-0 sshd-session[87205]: Accepted publickey for zuul from 192.168.122.30 port 52598 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:35:30 compute-0 systemd-logind[789]: New session 20 of user zuul.
Dec 10 19:35:30 compute-0 systemd[1]: Started Session 20 of User zuul.
Dec 10 19:35:30 compute-0 sshd-session[87205]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:35:31 compute-0 python3.9[87358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:35:32 compute-0 sudo[87512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfahobevtyklsidgexlwpemlnprexerj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395331.6910253-34-228631259457408/AnsiballZ_file.py'
Dec 10 19:35:32 compute-0 sudo[87512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:32 compute-0 python3.9[87514]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:32 compute-0 sudo[87512]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:32 compute-0 sudo[87664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erqmebruqgluxcklhhxyrnsqmzgmgnnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395332.3968298-34-214183946928417/AnsiballZ_file.py'
Dec 10 19:35:32 compute-0 sudo[87664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:32 compute-0 python3.9[87666]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:35:32 compute-0 sudo[87664]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:33 compute-0 python3.9[87816]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:35:34 compute-0 sudo[87966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvnlkvehwuggrjobfuycmevwxjdzqabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395333.9138448-57-167566529256681/AnsiballZ_seboolean.py'
Dec 10 19:35:34 compute-0 sudo[87966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:34 compute-0 python3.9[87968]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 10 19:35:35 compute-0 sudo[87966]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:36 compute-0 sudo[88122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfsswfkxgvlyoydyqelyuprbneklwfzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395336.2056332-67-158607745879056/AnsiballZ_setup.py'
Dec 10 19:35:36 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 10 19:35:36 compute-0 sudo[88122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:36 compute-0 python3.9[88124]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:35:37 compute-0 sudo[88122]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:37 compute-0 sudo[88206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qokwfcrqquwqgeobfpirhiurnpjxsghn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395336.2056332-67-158607745879056/AnsiballZ_dnf.py'
Dec 10 19:35:37 compute-0 sudo[88206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:37 compute-0 python3.9[88208]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:35:39 compute-0 sudo[88206]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:39 compute-0 sudo[88359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxxwaasgeporcmfdoxwshxbmhbwyhalr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395339.3265386-79-208904183062848/AnsiballZ_systemd.py'
Dec 10 19:35:39 compute-0 sudo[88359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:40 compute-0 python3.9[88361]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:35:40 compute-0 sudo[88359]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:40 compute-0 sudo[88514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scloqydajweuergcooxeglzzxtnqyazv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395340.4534798-87-267618082447756/AnsiballZ_edpm_nftables_snippet.py'
Dec 10 19:35:40 compute-0 sudo[88514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:41 compute-0 python3[88516]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
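[editor's note] The edpm_nftables_snippet entries above are only staged into /var/lib/edpm-config/firewall/ovn.yaml at this point; they are rendered into the /etc/nftables/*.nft files later in this run. As a rough, hypothetical illustration only (the filter chain name EDPM_INPUT is an assumption, not taken from this log; the raw table and OUTPUT/PREROUTING chains come from the snippet itself), the four entries describe rules along these lines:

    # sketch of the rules the snippet describes, not the role's actual rendered output
    nft add rule ip filter EDPM_INPUT udp dport 4789 accept    # 118 neutron vxlan networks (chain name assumed)
    nft add rule ip filter EDPM_INPUT udp dport 6081 accept    # 119 neutron geneve networks (chain name assumed)
    nft add rule ip raw OUTPUT udp dport 6081 notrack          # 120 geneve no conntrack, table/chain from the snippet
    nft add rule ip raw PREROUTING udp dport 6081 notrack      # 121 geneve no conntrack, table/chain from the snippet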
Dec 10 19:35:41 compute-0 sudo[88514]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:41 compute-0 sudo[88666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryngpqqpcjlhlrtuiiwqoihjosoaouhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395341.4342747-96-4544432923071/AnsiballZ_file.py'
Dec 10 19:35:41 compute-0 sudo[88666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:41 compute-0 python3.9[88668]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:41 compute-0 sudo[88666]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:42 compute-0 sudo[88818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tppmqidxthoelyyzvgjhviymvblzkjpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395342.008352-104-229642523041496/AnsiballZ_stat.py'
Dec 10 19:35:42 compute-0 sudo[88818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:42 compute-0 python3.9[88820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:42 compute-0 sudo[88818]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:42 compute-0 sudo[88896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nitdkvhrbvprtovlulgboqvxbbzspkfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395342.008352-104-229642523041496/AnsiballZ_file.py'
Dec 10 19:35:42 compute-0 sudo[88896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:43 compute-0 python3.9[88898]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:43 compute-0 sudo[88896]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:43 compute-0 sudo[89048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giybqppexioanfizbblzxglgowpodokm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395343.205107-116-106943267448574/AnsiballZ_stat.py'
Dec 10 19:35:43 compute-0 sudo[89048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:43 compute-0 python3.9[89050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:43 compute-0 sudo[89048]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:43 compute-0 sudo[89126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rquypomzpafoxuanizrvmecaryekzmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395343.205107-116-106943267448574/AnsiballZ_file.py'
Dec 10 19:35:43 compute-0 sudo[89126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:44 compute-0 python3.9[89128]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pwo470mc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:44 compute-0 sudo[89126]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:44 compute-0 sudo[89278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjucylycjbshbhhjactnodmxtyuglmfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395344.2212408-128-99326664091469/AnsiballZ_stat.py'
Dec 10 19:35:44 compute-0 sudo[89278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:44 compute-0 python3.9[89280]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:44 compute-0 sudo[89278]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:44 compute-0 sudo[89356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubpkvulbysitodyyidhncuqpgggekmxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395344.2212408-128-99326664091469/AnsiballZ_file.py'
Dec 10 19:35:44 compute-0 sudo[89356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:45 compute-0 python3.9[89358]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:45 compute-0 sudo[89356]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:45 compute-0 sudo[89508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgmbljwvczvqpnosfqzxitycwcbcrsvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395345.2795584-141-82477951149262/AnsiballZ_command.py'
Dec 10 19:35:45 compute-0 sudo[89508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:45 compute-0 python3.9[89510]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:35:45 compute-0 sudo[89508]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:46 compute-0 sudo[89661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtzaxzadyhvaqbcgouroaylfajmaodo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395346.1099386-149-199631007520612/AnsiballZ_edpm_nftables_from_files.py'
Dec 10 19:35:46 compute-0 sudo[89661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:46 compute-0 python3[89663]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 10 19:35:46 compute-0 sudo[89661]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:47 compute-0 sudo[89813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmefnxjwvfnfvgvsmlccbvrltfluzjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395346.8652651-157-221277200224354/AnsiballZ_stat.py'
Dec 10 19:35:47 compute-0 sudo[89813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:47 compute-0 python3.9[89815]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:47 compute-0 sudo[89813]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:47 compute-0 sudo[89938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inxujkjnwcsoshxdjsdejmnjmfpjhtid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395346.8652651-157-221277200224354/AnsiballZ_copy.py'
Dec 10 19:35:47 compute-0 sudo[89938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:48 compute-0 python3.9[89940]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395346.8652651-157-221277200224354/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:48 compute-0 sudo[89938]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:48 compute-0 sudo[90090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eygcwkpclxjrfxrokhoyyigruqifntbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395348.1915996-172-203543428916520/AnsiballZ_stat.py'
Dec 10 19:35:48 compute-0 sudo[90090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:48 compute-0 python3.9[90092]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:48 compute-0 sudo[90090]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:49 compute-0 sudo[90215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjewrbzjdvgvxmkzlivuvorwpfkfuao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395348.1915996-172-203543428916520/AnsiballZ_copy.py'
Dec 10 19:35:49 compute-0 sudo[90215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:49 compute-0 python3.9[90217]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395348.1915996-172-203543428916520/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:49 compute-0 sudo[90215]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:49 compute-0 sudo[90367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsbvokugnmprbqoneoqqwaizlwdbskeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395349.3656144-187-213918515586984/AnsiballZ_stat.py'
Dec 10 19:35:49 compute-0 sudo[90367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:49 compute-0 python3.9[90369]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:49 compute-0 sudo[90367]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:50 compute-0 sudo[90492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjturdtvqaqcoudzjznkfcefdjjudvgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395349.3656144-187-213918515586984/AnsiballZ_copy.py'
Dec 10 19:35:50 compute-0 sudo[90492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:50 compute-0 python3.9[90494]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395349.3656144-187-213918515586984/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:50 compute-0 sudo[90492]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:50 compute-0 sudo[90644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfxsxqmkuzxpmerjxugzxxiijkjzfbaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395350.6642704-202-108164880617009/AnsiballZ_stat.py'
Dec 10 19:35:51 compute-0 sudo[90644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:51 compute-0 python3.9[90646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:51 compute-0 sudo[90644]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:51 compute-0 sudo[90769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyhiwhhiqagiwurdyuduhsjpmsopfdvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395350.6642704-202-108164880617009/AnsiballZ_copy.py'
Dec 10 19:35:51 compute-0 sudo[90769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:51 compute-0 python3.9[90771]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395350.6642704-202-108164880617009/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:51 compute-0 sudo[90769]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:52 compute-0 sudo[90921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkipxzozwwcvxyozvvrrhrxxcrdxnkxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395352.127469-217-256896261095413/AnsiballZ_stat.py'
Dec 10 19:35:52 compute-0 sudo[90921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:52 compute-0 python3.9[90923]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:35:52 compute-0 sudo[90921]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:53 compute-0 sudo[91046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpjsiqnqlizlrgsggpodlseveggytvde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395352.127469-217-256896261095413/AnsiballZ_copy.py'
Dec 10 19:35:53 compute-0 sudo[91046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:53 compute-0 python3.9[91048]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395352.127469-217-256896261095413/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:53 compute-0 sudo[91046]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:53 compute-0 sudo[91198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xywaxilqibqexhuacqfsggpqfpipmqgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395353.427318-232-64524487342466/AnsiballZ_file.py'
Dec 10 19:35:53 compute-0 sudo[91198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:53 compute-0 python3.9[91200]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:53 compute-0 sudo[91198]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:54 compute-0 sudo[91350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znfakrkwjijnssqhwwabjqqxvutacghs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395354.0230038-240-100660708204328/AnsiballZ_command.py'
Dec 10 19:35:54 compute-0 sudo[91350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:54 compute-0 python3.9[91352]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:35:54 compute-0 sudo[91350]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:55 compute-0 sudo[91505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdayuarwiyvqjefnlpzdedllqflfykza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395354.6662004-248-132737541263749/AnsiballZ_blockinfile.py'
Dec 10 19:35:55 compute-0 sudo[91505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:55 compute-0 python3.9[91507]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
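[editor's note] The blockinfile task above edits /etc/sysconfig/nftables.conf in place and validates the result with nft -c -f before committing it. Reconstructed from the task parameters logged here (marker, marker_begin, marker_end and the block content), rather than copied from the target file, the managed block it maintains would read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK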
Dec 10 19:35:55 compute-0 sudo[91505]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:55 compute-0 sudo[91657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzswfjofrbkwiardhsmspbvmqdttytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395355.53341-257-188739767445644/AnsiballZ_command.py'
Dec 10 19:35:55 compute-0 sudo[91657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:55 compute-0 python3.9[91659]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:35:56 compute-0 sudo[91657]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:56 compute-0 sudo[91810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rowgivgakwlvvxlkmzgsxwbizhwelycs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395356.1673481-265-244059070707220/AnsiballZ_stat.py'
Dec 10 19:35:56 compute-0 sudo[91810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:56 compute-0 python3.9[91812]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:35:56 compute-0 sudo[91810]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:57 compute-0 sudo[91964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqstlnlghqqeyeuuyfnelluihxjnuthi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395356.8218706-273-85117531737649/AnsiballZ_command.py'
Dec 10 19:35:57 compute-0 sudo[91964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:57 compute-0 python3.9[91966]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:35:57 compute-0 sudo[91964]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:57 compute-0 sudo[92119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebictjxgoffqhrcusiobchjqwazyeuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395357.4620173-281-2320676764296/AnsiballZ_file.py'
Dec 10 19:35:57 compute-0 sudo[92119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:35:57 compute-0 python3.9[92121]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:35:57 compute-0 sudo[92119]: pam_unix(sudo:session): session closed for user root
Dec 10 19:35:58 compute-0 python3.9[92271]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:35:59 compute-0 sudo[92422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sheumpkrahawczmilukagfohgzpbujtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395359.477934-321-78149690003037/AnsiballZ_command.py'
Dec 10 19:35:59 compute-0 sudo[92422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:00 compute-0 python3.9[92424]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:cb:58:d7:dd" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:36:00 compute-0 ovs-vsctl[92425]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:cb:58:d7:dd external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 10 19:36:00 compute-0 sudo[92422]: pam_unix(sudo:session): session closed for user root
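[editor's note] The ovs-vsctl call above wires ovn-controller on this node to the OVN southbound database (ssl:ovsdbserver-sb.openstack.svc:6642) and sets its encapsulation (geneve over 172.19.0.100) and bridge mappings. To confirm what was written, the same keys can be read back with standard ovs-vsctl usage (these verification commands are illustrative, not taken from this log):

    ovs-vsctl get open . external_ids:ovn-remote      # expected: "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl get open . external_ids:ovn-encap-ip    # expected: "172.19.0.100"
    ovs-vsctl get open . external_ids                 # dump all keys set by the task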
Dec 10 19:36:00 compute-0 sudo[92575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xakusbvksczndcmuxibtwkvknofufjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395360.2796528-330-59778638219595/AnsiballZ_command.py'
Dec 10 19:36:00 compute-0 sudo[92575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:00 compute-0 python3.9[92577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:36:00 compute-0 sudo[92575]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:01 compute-0 sudo[92730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gknwjfqcftzbszknugplmtvjujhgelfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395360.9722733-338-257983090982717/AnsiballZ_command.py'
Dec 10 19:36:01 compute-0 sudo[92730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:01 compute-0 python3.9[92732]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:36:01 compute-0 ovs-vsctl[92733]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 10 19:36:01 compute-0 sudo[92730]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:02 compute-0 python3.9[92883]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:36:02 compute-0 sudo[93035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hefbiyenmyiohlvnzxedjftixbuqkbnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395362.328723-355-222389906808801/AnsiballZ_file.py'
Dec 10 19:36:02 compute-0 sudo[93035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:02 compute-0 python3.9[93037]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:02 compute-0 sudo[93035]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:03 compute-0 sudo[93187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrwdurejaflldizhhvajxjqtilckkesh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395363.1001835-363-5651208753606/AnsiballZ_stat.py'
Dec 10 19:36:03 compute-0 sudo[93187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:03 compute-0 python3.9[93189]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:03 compute-0 sudo[93187]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:03 compute-0 sudo[93265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhlxbwisutnmrvpeqddixswimoujuxfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395363.1001835-363-5651208753606/AnsiballZ_file.py'
Dec 10 19:36:03 compute-0 sudo[93265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:04 compute-0 python3.9[93267]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:04 compute-0 sudo[93265]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:04 compute-0 sudo[93417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbryjqjcqqzzowedaomuolwuwymfaqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395364.313117-363-33162596493468/AnsiballZ_stat.py'
Dec 10 19:36:04 compute-0 sudo[93417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:04 compute-0 python3.9[93419]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:04 compute-0 sudo[93417]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:05 compute-0 sudo[93495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihjlbhukweudpbyjffqitbcktovjxmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395364.313117-363-33162596493468/AnsiballZ_file.py'
Dec 10 19:36:05 compute-0 sudo[93495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:05 compute-0 python3.9[93497]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:05 compute-0 sudo[93495]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:05 compute-0 sudo[93647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhjtrifgtjsffntqmoishhqnwazyyxfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395365.3798442-386-255016301669529/AnsiballZ_file.py'
Dec 10 19:36:05 compute-0 sudo[93647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:06 compute-0 python3.9[93649]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:06 compute-0 sudo[93647]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:06 compute-0 sudo[93799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwzbirzyjwhxriojpiihngveebgfloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395366.313794-394-121964358336685/AnsiballZ_stat.py'
Dec 10 19:36:06 compute-0 sudo[93799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:06 compute-0 python3.9[93801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:06 compute-0 sudo[93799]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:07 compute-0 sudo[93877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyezfctxvjaeqxyaqcqjoabkuytwwyfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395366.313794-394-121964358336685/AnsiballZ_file.py'
Dec 10 19:36:07 compute-0 sudo[93877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:07 compute-0 python3.9[93879]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:07 compute-0 sudo[93877]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:08 compute-0 sudo[94029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyftqngjtqmrtcoqffslkozzhfbpqqxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395367.7872097-406-181152590234815/AnsiballZ_stat.py'
Dec 10 19:36:08 compute-0 sudo[94029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:08 compute-0 python3.9[94031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:08 compute-0 sudo[94029]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:08 compute-0 sudo[94107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqolestgywioimdvmemywwvirwjcslme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395367.7872097-406-181152590234815/AnsiballZ_file.py'
Dec 10 19:36:08 compute-0 sudo[94107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:08 compute-0 python3.9[94109]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:08 compute-0 sudo[94107]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:09 compute-0 sudo[94259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkruxfryvrvcbudfqtcvswdkqeriosge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395368.9140313-418-122789821169183/AnsiballZ_systemd.py'
Dec 10 19:36:09 compute-0 sudo[94259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:09 compute-0 python3.9[94261]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:36:09 compute-0 systemd[1]: Reloading.
Dec 10 19:36:09 compute-0 systemd-rc-local-generator[94286]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:36:09 compute-0 systemd-sysv-generator[94291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:36:09 compute-0 sudo[94259]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:10 compute-0 sudo[94447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkdruobllsbusyxzqupradarcazbzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395369.9986126-426-256803690630359/AnsiballZ_stat.py'
Dec 10 19:36:10 compute-0 sudo[94447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:10 compute-0 python3.9[94449]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:10 compute-0 sudo[94447]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:10 compute-0 sudo[94525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-setyapiuvulxdkstefhujgwfwjijpkry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395369.9986126-426-256803690630359/AnsiballZ_file.py'
Dec 10 19:36:10 compute-0 sudo[94525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:11 compute-0 python3.9[94527]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:11 compute-0 sudo[94525]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:11 compute-0 sudo[94677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxvkgviojuolwatscntjdiumnuvqaqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395371.234947-438-248632998779782/AnsiballZ_stat.py'
Dec 10 19:36:11 compute-0 sudo[94677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:11 compute-0 python3.9[94679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:11 compute-0 sudo[94677]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:12 compute-0 sudo[94755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeqyhqeqwarkdofcvcdbkajttuhvsssu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395371.234947-438-248632998779782/AnsiballZ_file.py'
Dec 10 19:36:12 compute-0 sudo[94755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:12 compute-0 python3.9[94757]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:12 compute-0 sudo[94755]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:12 compute-0 sudo[94907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewabfhmdmnpnzebddxpzdnklyxlxdxxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395372.4330344-450-260207530112741/AnsiballZ_systemd.py'
Dec 10 19:36:12 compute-0 sudo[94907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:12 compute-0 python3.9[94909]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:36:13 compute-0 systemd[1]: Reloading.
Dec 10 19:36:13 compute-0 systemd-rc-local-generator[94937]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:36:13 compute-0 systemd-sysv-generator[94941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:36:13 compute-0 systemd[1]: Starting Create netns directory...
Dec 10 19:36:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 10 19:36:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 10 19:36:13 compute-0 systemd[1]: Finished Create netns directory.
Dec 10 19:36:13 compute-0 sudo[94907]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:13 compute-0 sudo[95100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpirondktiiyxrmvqckcgbmypuuhaaaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395373.5453286-460-210296289631207/AnsiballZ_file.py'
Dec 10 19:36:13 compute-0 sudo[95100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:14 compute-0 python3.9[95102]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:14 compute-0 sudo[95100]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:14 compute-0 sudo[95252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfafpayjptgdpnvpztljsaavkzwpbmqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395374.1963587-468-159430165685008/AnsiballZ_stat.py'
Dec 10 19:36:14 compute-0 sudo[95252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:14 compute-0 python3.9[95254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:14 compute-0 sudo[95252]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:14 compute-0 sudo[95375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aizayrmjyndkbaotsxcfbkodpvjlelsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395374.1963587-468-159430165685008/AnsiballZ_copy.py'
Dec 10 19:36:14 compute-0 sudo[95375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:15 compute-0 python3.9[95377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395374.1963587-468-159430165685008/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:15 compute-0 sudo[95375]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:15 compute-0 sudo[95527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twcueajurcsapkpdckwzfugavtyycvbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395375.5300982-485-134688976615880/AnsiballZ_file.py'
Dec 10 19:36:15 compute-0 sudo[95527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:15 compute-0 python3.9[95529]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:15 compute-0 sudo[95527]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:16 compute-0 sudo[95679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbckrbswdhtxntjsuvxstljlfeiwphse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395376.1546798-493-67410813845505/AnsiballZ_stat.py'
Dec 10 19:36:16 compute-0 sudo[95679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:16 compute-0 python3.9[95681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:16 compute-0 sudo[95679]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:16 compute-0 sudo[95802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhksttsupsmvbgcauvwvknkgwaqonzej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395376.1546798-493-67410813845505/AnsiballZ_copy.py'
Dec 10 19:36:16 compute-0 sudo[95802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:17 compute-0 python3.9[95804]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395376.1546798-493-67410813845505/.source.json _original_basename=.j6ufx47z follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:17 compute-0 sudo[95802]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:17 compute-0 sudo[95954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtihwuzdqdrelepnssohicdmlaaouuom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395377.3743625-508-153143299943623/AnsiballZ_file.py'
Dec 10 19:36:17 compute-0 sudo[95954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:17 compute-0 python3.9[95956]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:17 compute-0 sudo[95954]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:18 compute-0 sudo[96106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opsennwxlrnogjmofgsvnjphqdodoamo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395378.1235552-516-8920038726660/AnsiballZ_stat.py'
Dec 10 19:36:18 compute-0 sudo[96106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:18 compute-0 sudo[96106]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:19 compute-0 sudo[96229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geiyhnsyhzvltkgifwgmzwclttbinqfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395378.1235552-516-8920038726660/AnsiballZ_copy.py'
Dec 10 19:36:19 compute-0 sudo[96229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:19 compute-0 sudo[96229]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:20 compute-0 sudo[96381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abglbiiwzkvfegdpkbgbnsqmglttajot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395379.7067585-533-145946888542483/AnsiballZ_container_config_data.py'
Dec 10 19:36:20 compute-0 sudo[96381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:20 compute-0 python3.9[96383]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 10 19:36:20 compute-0 sudo[96381]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:21 compute-0 sudo[96533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewcaqgczqvrbzvskywzcnpaelmlgmyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395380.6348548-542-99174034155737/AnsiballZ_container_config_hash.py'
Dec 10 19:36:21 compute-0 sudo[96533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:21 compute-0 python3.9[96535]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:36:21 compute-0 sudo[96533]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:21 compute-0 sudo[96685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsbmhimtphchispuzjknaoejptkvysbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395381.4962223-551-58201423006014/AnsiballZ_podman_container_info.py'
Dec 10 19:36:21 compute-0 sudo[96685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:22 compute-0 python3.9[96687]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 10 19:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:36:22 compute-0 sudo[96685]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:23 compute-0 sudo[96849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spbzqvkfpxallqlnfaeiborybdxvwety ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395382.5977585-564-104610008903190/AnsiballZ_edpm_container_manage.py'
Dec 10 19:36:23 compute-0 sudo[96849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:23 compute-0 python3[96851]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:36:23 compute-0 podman[96888]: 2025-12-10 19:36:23.569486328 +0000 UTC m=+0.079463127 container create 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 10 19:36:23 compute-0 podman[96888]: 2025-12-10 19:36:23.517016622 +0000 UTC m=+0.026993451 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 10 19:36:23 compute-0 python3[96851]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 10 19:36:23 compute-0 sudo[96849]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:24 compute-0 sudo[97074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwzarzizfdkuxnfgojbsqsmvibghhgpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395383.9051018-572-139829929054128/AnsiballZ_stat.py'
Dec 10 19:36:24 compute-0 sudo[97074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:24 compute-0 python3.9[97076]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 10 19:36:24 compute-0 sudo[97074]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:24 compute-0 sudo[97228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oervbpckgqvftofghbrhmctwmhssgjzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395384.7016928-581-48850658900168/AnsiballZ_file.py'
Dec 10 19:36:24 compute-0 sudo[97228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:25 compute-0 python3.9[97230]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:25 compute-0 sudo[97228]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:25 compute-0 sudo[97304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcvmsgjnxlbhhvdxplknwgxzcgxqemxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395384.7016928-581-48850658900168/AnsiballZ_stat.py'
Dec 10 19:36:25 compute-0 sudo[97304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:25 compute-0 python3.9[97306]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:36:25 compute-0 sudo[97304]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:26 compute-0 sudo[97455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrinhsyhpsjptrweoeugzgmworceqaaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395385.786624-581-170031033663697/AnsiballZ_copy.py'
Dec 10 19:36:26 compute-0 sudo[97455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:26 compute-0 python3.9[97457]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395385.786624-581-170031033663697/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:26 compute-0 sudo[97455]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:26 compute-0 sudo[97531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfvfghqdkapfopyiekemfjmggknyeski ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395385.786624-581-170031033663697/AnsiballZ_systemd.py'
Dec 10 19:36:26 compute-0 sudo[97531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:27 compute-0 python3.9[97533]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:36:27 compute-0 systemd[1]: Reloading.
Dec 10 19:36:27 compute-0 systemd-rc-local-generator[97564]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:36:27 compute-0 systemd-sysv-generator[97567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:36:27 compute-0 sudo[97531]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:27 compute-0 sudo[97643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crlfrlfcgvvrjkxrmxoqhmwbyfqrkjtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395385.786624-581-170031033663697/AnsiballZ_systemd.py'
Dec 10 19:36:27 compute-0 sudo[97643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:27 compute-0 python3.9[97645]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:36:27 compute-0 systemd[1]: Reloading.
Dec 10 19:36:28 compute-0 systemd-sysv-generator[97679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:36:28 compute-0 systemd-rc-local-generator[97675]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:36:28 compute-0 systemd[1]: Starting ovn_controller container...
Dec 10 19:36:28 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 10 19:36:28 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28b1d733a06953c208746398c2bdfd1a1e74da657e97ad6374ec409f8e2d0c6/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 10 19:36:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.
Dec 10 19:36:28 compute-0 podman[97686]: 2025-12-10 19:36:28.387743235 +0000 UTC m=+0.149973307 container init 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + sudo -E kolla_set_configs
Dec 10 19:36:28 compute-0 podman[97686]: 2025-12-10 19:36:28.424035577 +0000 UTC m=+0.186265589 container start 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 10 19:36:28 compute-0 edpm-start-podman-container[97686]: ovn_controller
Dec 10 19:36:28 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 10 19:36:28 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 10 19:36:28 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 10 19:36:28 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 10 19:36:28 compute-0 systemd[97730]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 10 19:36:28 compute-0 edpm-start-podman-container[97685]: Creating additional drop-in dependency for "ovn_controller" (9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17)
Dec 10 19:36:28 compute-0 podman[97708]: 2025-12-10 19:36:28.507470059 +0000 UTC m=+0.071440476 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 10 19:36:28 compute-0 systemd[1]: 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17-16a4aefce5757b4a.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:36:28 compute-0 systemd[1]: 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17-16a4aefce5757b4a.service: Failed with result 'exit-code'.
Dec 10 19:36:28 compute-0 systemd[1]: Reloading.
Dec 10 19:36:28 compute-0 systemd[97730]: Queued start job for default target Main User Target.
Dec 10 19:36:28 compute-0 systemd-rc-local-generator[97790]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:36:28 compute-0 systemd[97730]: Created slice User Application Slice.
Dec 10 19:36:28 compute-0 systemd[97730]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 10 19:36:28 compute-0 systemd[97730]: Started Daily Cleanup of User's Temporary Directories.
Dec 10 19:36:28 compute-0 systemd[97730]: Reached target Paths.
Dec 10 19:36:28 compute-0 systemd[97730]: Reached target Timers.
Dec 10 19:36:28 compute-0 systemd-sysv-generator[97795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:36:28 compute-0 systemd[97730]: Starting D-Bus User Message Bus Socket...
Dec 10 19:36:28 compute-0 systemd[97730]: Starting Create User's Volatile Files and Directories...
Dec 10 19:36:28 compute-0 systemd[97730]: Finished Create User's Volatile Files and Directories.
Dec 10 19:36:28 compute-0 systemd[97730]: Listening on D-Bus User Message Bus Socket.
Dec 10 19:36:28 compute-0 systemd[97730]: Reached target Sockets.
Dec 10 19:36:28 compute-0 systemd[97730]: Reached target Basic System.
Dec 10 19:36:28 compute-0 systemd[97730]: Reached target Main User Target.
Dec 10 19:36:28 compute-0 systemd[97730]: Startup finished in 119ms.
Dec 10 19:36:28 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 10 19:36:28 compute-0 systemd[1]: Started ovn_controller container.
Dec 10 19:36:28 compute-0 systemd[1]: Started Session c1 of User root.
Dec 10 19:36:28 compute-0 sudo[97643]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:28 compute-0 ovn_controller[97701]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:36:28 compute-0 ovn_controller[97701]: INFO:__main__:Validating config file
Dec 10 19:36:28 compute-0 ovn_controller[97701]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:36:28 compute-0 ovn_controller[97701]: INFO:__main__:Writing out command to execute
Dec 10 19:36:28 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: ++ cat /run_command
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + ARGS=
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + sudo kolla_copy_cacerts
Dec 10 19:36:28 compute-0 systemd[1]: Started Session c2 of User root.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + [[ ! -n '' ]]
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + . kolla_extend_start
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + umask 0022
Dec 10 19:36:28 compute-0 ovn_controller[97701]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 10 19:36:28 compute-0 ovn_controller[97701]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 10 19:36:28 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.8957] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.8968] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <warn>  [1765395388.8971] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.8977] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.8984] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.8988] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 10 19:36:28 compute-0 kernel: br-int: entered promiscuous mode
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00022|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 10 19:36:28 compute-0 ovn_controller[97701]: 2025-12-10T19:36:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.9255] manager: (ovn-6455d0-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 10 19:36:28 compute-0 systemd-udevd[97856]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:36:28 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.9414] device (genev_sys_6081): carrier: link connected
Dec 10 19:36:28 compute-0 NetworkManager[56238]: <info>  [1765395388.9417] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec 10 19:36:28 compute-0 systemd-udevd[97863]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:36:29 compute-0 sudo[97969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwnlrwhmkommusaoqigbjtoajddinhka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395388.9470234-609-64893092127758/AnsiballZ_command.py'
Dec 10 19:36:29 compute-0 sudo[97969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:29 compute-0 python3.9[97971]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:36:29 compute-0 ovs-vsctl[97972]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 10 19:36:29 compute-0 sudo[97969]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:29 compute-0 sudo[98122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qefmjguvmijbjdupzjuhrrzdwllnsruw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395389.6036932-617-42895596642461/AnsiballZ_command.py'
Dec 10 19:36:29 compute-0 sudo[98122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:30 compute-0 python3.9[98124]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:36:30 compute-0 ovs-vsctl[98126]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 10 19:36:30 compute-0 sudo[98122]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:30 compute-0 sudo[98277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjiiuywmmeqyjrndazjckewgpnpuzcjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395390.498473-631-209380361794556/AnsiballZ_command.py'
Dec 10 19:36:30 compute-0 sudo[98277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:30 compute-0 python3.9[98279]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:36:30 compute-0 ovs-vsctl[98280]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 10 19:36:30 compute-0 sudo[98277]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:31 compute-0 sshd-session[87208]: Connection closed by 192.168.122.30 port 52598
Dec 10 19:36:31 compute-0 sshd-session[87205]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:36:31 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Dec 10 19:36:31 compute-0 systemd[1]: session-20.scope: Consumed 46.615s CPU time.
Dec 10 19:36:31 compute-0 systemd-logind[789]: Session 20 logged out. Waiting for processes to exit.
Dec 10 19:36:31 compute-0 systemd-logind[789]: Removed session 20.
Dec 10 19:36:36 compute-0 sshd-session[98305]: Accepted publickey for zuul from 192.168.122.30 port 58896 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:36:36 compute-0 systemd-logind[789]: New session 22 of user zuul.
Dec 10 19:36:36 compute-0 systemd[1]: Started Session 22 of User zuul.
Dec 10 19:36:36 compute-0 sshd-session[98305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:36:37 compute-0 python3.9[98458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:36:38 compute-0 sudo[98612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuchbeiglyjsswkiohcrvdlrkqbdgpwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395398.3883007-34-66919969999543/AnsiballZ_file.py'
Dec 10 19:36:38 compute-0 sudo[98612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:38 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 10 19:36:38 compute-0 systemd[97730]: Activating special unit Exit the Session...
Dec 10 19:36:38 compute-0 systemd[97730]: Stopped target Main User Target.
Dec 10 19:36:38 compute-0 systemd[97730]: Stopped target Basic System.
Dec 10 19:36:38 compute-0 systemd[97730]: Stopped target Paths.
Dec 10 19:36:38 compute-0 systemd[97730]: Stopped target Sockets.
Dec 10 19:36:38 compute-0 systemd[97730]: Stopped target Timers.
Dec 10 19:36:38 compute-0 systemd[97730]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 10 19:36:38 compute-0 systemd[97730]: Closed D-Bus User Message Bus Socket.
Dec 10 19:36:38 compute-0 systemd[97730]: Stopped Create User's Volatile Files and Directories.
Dec 10 19:36:38 compute-0 systemd[97730]: Removed slice User Application Slice.
Dec 10 19:36:38 compute-0 systemd[97730]: Reached target Shutdown.
Dec 10 19:36:38 compute-0 systemd[97730]: Finished Exit the Session.
Dec 10 19:36:38 compute-0 systemd[97730]: Reached target Exit the Session.
Dec 10 19:36:39 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 10 19:36:39 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 10 19:36:39 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 10 19:36:39 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 10 19:36:39 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 10 19:36:39 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 10 19:36:39 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 10 19:36:39 compute-0 python3.9[98614]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:39 compute-0 sudo[98612]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:39 compute-0 sudo[98767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whhgicjolzagmgoxdahewyhbsfrferfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395399.2293243-34-16263075094758/AnsiballZ_file.py'
Dec 10 19:36:39 compute-0 sudo[98767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:39 compute-0 python3.9[98769]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:39 compute-0 sudo[98767]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:40 compute-0 sudo[98919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psltfsonqfcgszdknidetcdytyflyobo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395399.9027364-34-83890259543795/AnsiballZ_file.py'
Dec 10 19:36:40 compute-0 sudo[98919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:40 compute-0 python3.9[98921]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:40 compute-0 sudo[98919]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:40 compute-0 sudo[99071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhhmqjaloaempdzlpbzfpjgfubokfrgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395400.5882888-34-116108766593174/AnsiballZ_file.py'
Dec 10 19:36:40 compute-0 sudo[99071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:41 compute-0 python3.9[99073]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:41 compute-0 sudo[99071]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:41 compute-0 sudo[99223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodxcyvgzahjgpiatlaowaqdagpuyzay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395401.2221167-34-82798781422695/AnsiballZ_file.py'
Dec 10 19:36:41 compute-0 sudo[99223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:41 compute-0 python3.9[99225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:41 compute-0 sudo[99223]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:42 compute-0 python3.9[99375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:36:43 compute-0 sudo[99525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feiwtkhnkdhusjsgmulzwsnpteqlohyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395402.685371-78-197394849768590/AnsiballZ_seboolean.py'
Dec 10 19:36:43 compute-0 sudo[99525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:43 compute-0 python3.9[99527]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 10 19:36:43 compute-0 sudo[99525]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:44 compute-0 python3.9[99677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:45 compute-0 python3.9[99798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395404.102703-86-115121723516981/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:46 compute-0 python3.9[99948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:46 compute-0 python3.9[100069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395405.6257613-101-183952895861871/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:47 compute-0 sudo[100220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywmflqikarpqtrwpbaxnuqbxqvseugdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395406.8333216-118-8202847060739/AnsiballZ_setup.py'
Dec 10 19:36:47 compute-0 sudo[100220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:47 compute-0 python3.9[100222]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:36:47 compute-0 sudo[100220]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:48 compute-0 sudo[100304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rplxhooeseusqncgdhpecidymudijzip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395406.8333216-118-8202847060739/AnsiballZ_dnf.py'
Dec 10 19:36:48 compute-0 sudo[100304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:48 compute-0 python3.9[100306]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:36:49 compute-0 sudo[100304]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:50 compute-0 sudo[100457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzoafdsxuoqkkmwvdldbcuxarktbnhox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395409.8041956-130-201087411550772/AnsiballZ_systemd.py'
Dec 10 19:36:50 compute-0 sudo[100457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:50 compute-0 python3.9[100459]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:36:50 compute-0 sudo[100457]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:51 compute-0 python3.9[100612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:51 compute-0 python3.9[100733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395410.986578-138-171914453277856/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:52 compute-0 python3.9[100883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:52 compute-0 python3.9[101004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395412.065851-138-144257535041599/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:54 compute-0 python3.9[101154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:54 compute-0 python3.9[101275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395413.654151-182-204770500595446/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:55 compute-0 python3.9[101425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:55 compute-0 python3.9[101546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395414.820306-182-36249368362282/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:56 compute-0 python3.9[101696]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:36:56 compute-0 sudo[101848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwktumjavskdltncqvwpfcovtxvtrjfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395416.512233-220-67999473205267/AnsiballZ_file.py'
Dec 10 19:36:56 compute-0 sudo[101848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:56 compute-0 python3.9[101850]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:56 compute-0 sudo[101848]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:57 compute-0 sudo[102000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qipuwykhtmismaouuzyccpqesfuomhlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395417.1472282-228-80382125448270/AnsiballZ_stat.py'
Dec 10 19:36:57 compute-0 sudo[102000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:57 compute-0 python3.9[102002]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:57 compute-0 sudo[102000]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:57 compute-0 sudo[102078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcwulfsorjnbquvkacvhlllcipqmpyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395417.1472282-228-80382125448270/AnsiballZ_file.py'
Dec 10 19:36:57 compute-0 sudo[102078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:58 compute-0 python3.9[102080]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:58 compute-0 sudo[102078]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:58 compute-0 sudo[102230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnrspwyhneojpufftiqrvzsgttepnhty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395418.2140126-228-67701746005618/AnsiballZ_stat.py'
Dec 10 19:36:58 compute-0 sudo[102230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:58 compute-0 python3.9[102232]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:36:58 compute-0 sudo[102230]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:58 compute-0 sudo[102323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmzrnslpriwbgkbiiarkqytdsqgxside ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395418.2140126-228-67701746005618/AnsiballZ_file.py'
Dec 10 19:36:58 compute-0 sudo[102323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:58 compute-0 ovn_controller[97701]: 2025-12-10T19:36:58Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Dec 10 19:36:58 compute-0 ovn_controller[97701]: 2025-12-10T19:36:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 10 19:36:58 compute-0 podman[102282]: 2025-12-10 19:36:58.95601069 +0000 UTC m=+0.087994266 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 10 19:36:59 compute-0 python3.9[102329]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:36:59 compute-0 sudo[102323]: pam_unix(sudo:session): session closed for user root
Dec 10 19:36:59 compute-0 sudo[102486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iddtafbhczlofczzekjexyinjputysko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395419.265039-251-111578103132039/AnsiballZ_file.py'
Dec 10 19:36:59 compute-0 sudo[102486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:36:59 compute-0 python3.9[102488]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:36:59 compute-0 sudo[102486]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:00 compute-0 sudo[102638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxdkzxjvorkzxuxnhkrxhycablmehmdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395419.9033208-259-232087828785235/AnsiballZ_stat.py'
Dec 10 19:37:00 compute-0 sudo[102638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:00 compute-0 python3.9[102640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:37:00 compute-0 sudo[102638]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:00 compute-0 sudo[102716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btsxkslzfzwkbdatlkeuofzpnostlfqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395419.9033208-259-232087828785235/AnsiballZ_file.py'
Dec 10 19:37:00 compute-0 sudo[102716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:00 compute-0 python3.9[102718]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:00 compute-0 sudo[102716]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:01 compute-0 sudo[102868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsgvngmjvckehxwziaaliiatcdhaievq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395421.1582236-271-89032422244003/AnsiballZ_stat.py'
Dec 10 19:37:01 compute-0 sudo[102868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:01 compute-0 python3.9[102870]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:37:01 compute-0 sudo[102868]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:02 compute-0 sudo[102946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrypuwtzzslupxqdrrmxuivmnvkdcsgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395421.1582236-271-89032422244003/AnsiballZ_file.py'
Dec 10 19:37:02 compute-0 sudo[102946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:02 compute-0 python3.9[102948]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:02 compute-0 sudo[102946]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:02 compute-0 sudo[103098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plkvrvswpbiedxvwdssdnskofpbcafpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395422.4015043-283-128724907320406/AnsiballZ_systemd.py'
Dec 10 19:37:02 compute-0 sudo[103098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:02 compute-0 python3.9[103100]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:03 compute-0 systemd[1]: Reloading.
Dec 10 19:37:03 compute-0 systemd-rc-local-generator[103127]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:37:03 compute-0 systemd-sysv-generator[103130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:37:03 compute-0 sudo[103098]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:03 compute-0 sudo[103287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqajuzkwevqqlseocmfqijuymuuvxyby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395423.4345455-291-262508385132710/AnsiballZ_stat.py'
Dec 10 19:37:03 compute-0 sudo[103287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:03 compute-0 python3.9[103289]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:37:03 compute-0 sudo[103287]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:04 compute-0 sudo[103365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kphgtgsehtjhvvapkglwmcfycoiyaxqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395423.4345455-291-262508385132710/AnsiballZ_file.py'
Dec 10 19:37:04 compute-0 sudo[103365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:04 compute-0 python3.9[103367]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:04 compute-0 sudo[103365]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:04 compute-0 sudo[103517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtnyatxpfnwrfxabkdaqbnntiamtvgaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395424.5168664-303-161207496334453/AnsiballZ_stat.py'
Dec 10 19:37:04 compute-0 sudo[103517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:04 compute-0 python3.9[103519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:37:05 compute-0 sudo[103517]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:05 compute-0 sudo[103595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hggfbmlcqpbkdwnfwtxeajeatkczbipj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395424.5168664-303-161207496334453/AnsiballZ_file.py'
Dec 10 19:37:05 compute-0 sudo[103595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:05 compute-0 python3.9[103597]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:05 compute-0 sudo[103595]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:05 compute-0 sudo[103747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujkqtgytssnbsyzlsryyvnfevxloofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395425.6390421-315-74899883461577/AnsiballZ_systemd.py'
Dec 10 19:37:05 compute-0 sudo[103747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:06 compute-0 python3.9[103749]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:06 compute-0 systemd[1]: Reloading.
Dec 10 19:37:06 compute-0 systemd-rc-local-generator[103778]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:37:06 compute-0 systemd-sysv-generator[103781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:37:06 compute-0 systemd[1]: Starting Create netns directory...
Dec 10 19:37:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 10 19:37:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 10 19:37:06 compute-0 systemd[1]: Finished Create netns directory.
Dec 10 19:37:06 compute-0 sudo[103747]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:07 compute-0 sudo[103940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azrmivbljkzmptmdzfbrhxzmfyopbwsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395426.8119411-325-65428102340378/AnsiballZ_file.py'
Dec 10 19:37:07 compute-0 sudo[103940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:07 compute-0 python3.9[103942]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:37:07 compute-0 sudo[103940]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:07 compute-0 sudo[104092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qllbrgbxowgqewftgpcinsarupeisruf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395427.4302905-333-140864952342407/AnsiballZ_stat.py'
Dec 10 19:37:07 compute-0 sudo[104092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:07 compute-0 python3.9[104094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:37:07 compute-0 sudo[104092]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:08 compute-0 sudo[104215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzywjhvkneuetrmqbjrytigyqorvlpiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395427.4302905-333-140864952342407/AnsiballZ_copy.py'
Dec 10 19:37:08 compute-0 sudo[104215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:08 compute-0 python3.9[104217]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395427.4302905-333-140864952342407/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:37:08 compute-0 sudo[104215]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:09 compute-0 sudo[104367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rueesdcwdeezolqqtsoglzcefauxuxcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395428.8002355-350-101080419488333/AnsiballZ_file.py'
Dec 10 19:37:09 compute-0 sudo[104367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:09 compute-0 python3.9[104369]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:37:09 compute-0 sudo[104367]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:09 compute-0 sudo[104519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiebrejerepkzezgilwjnluvxnrnzyvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395429.4709942-358-252381090216262/AnsiballZ_stat.py'
Dec 10 19:37:09 compute-0 sudo[104519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:09 compute-0 python3.9[104521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:37:09 compute-0 sudo[104519]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:10 compute-0 sudo[104642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzomryrxunmbqfeddhmvbquusddlxfza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395429.4709942-358-252381090216262/AnsiballZ_copy.py'
Dec 10 19:37:10 compute-0 sudo[104642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:10 compute-0 python3.9[104644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395429.4709942-358-252381090216262/.source.json _original_basename=.qivr_d2b follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:10 compute-0 sudo[104642]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:10 compute-0 sudo[104794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pamjpxeuylildtpddmipgwwroqxzixki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395430.5522723-373-26140758041038/AnsiballZ_file.py'
Dec 10 19:37:10 compute-0 sudo[104794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:10 compute-0 python3.9[104796]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:11 compute-0 sudo[104794]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:11 compute-0 sudo[104946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzxbqpobnhzqmwrsfofsyyobzyiywlkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395431.192345-381-27956158605458/AnsiballZ_stat.py'
Dec 10 19:37:11 compute-0 sudo[104946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:11 compute-0 sudo[104946]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:11 compute-0 sudo[105069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmeyfgfwpllwxmoijgmvoltltpagtjpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395431.192345-381-27956158605458/AnsiballZ_copy.py'
Dec 10 19:37:11 compute-0 sudo[105069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:12 compute-0 sudo[105069]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:12 compute-0 sudo[105221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqavonrydnklqyefkxikzgexgdiaqtqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395432.4702346-398-163417284207111/AnsiballZ_container_config_data.py'
Dec 10 19:37:12 compute-0 sudo[105221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:13 compute-0 python3.9[105223]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 10 19:37:13 compute-0 sudo[105221]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:13 compute-0 sudo[105373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdvtwtsjjvjnvomzduodqiligxekpptd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395433.4101-407-74427209188219/AnsiballZ_container_config_hash.py'
Dec 10 19:37:13 compute-0 sudo[105373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:14 compute-0 python3.9[105375]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:37:14 compute-0 sudo[105373]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:14 compute-0 sudo[105525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdlzcnoqlmzkabgfvjtovznbupgmqlqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395434.2869391-416-231164453438524/AnsiballZ_podman_container_info.py'
Dec 10 19:37:14 compute-0 sudo[105525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:14 compute-0 python3.9[105527]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 10 19:37:15 compute-0 sudo[105525]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:16 compute-0 sudo[105704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rehjeqxdnybbqqgkqgbfvmwppvfdzore ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395435.5672987-429-4175569421420/AnsiballZ_edpm_container_manage.py'
Dec 10 19:37:16 compute-0 sudo[105704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:16 compute-0 python3[105706]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:37:16 compute-0 podman[105744]: 2025-12-10 19:37:16.487627576 +0000 UTC m=+0.047854112 container create 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 19:37:16 compute-0 podman[105744]: 2025-12-10 19:37:16.461279395 +0000 UTC m=+0.021505941 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 19:37:16 compute-0 python3[105706]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 19:37:16 compute-0 sudo[105704]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:17 compute-0 sudo[105932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvqxabwrhodrdqiyxzgviyksdcjbykxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395436.7952397-437-278329941341910/AnsiballZ_stat.py'
Dec 10 19:37:17 compute-0 sudo[105932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:17 compute-0 python3.9[105934]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:37:17 compute-0 sudo[105932]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:17 compute-0 sudo[106086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtsxioyplhquxgcafpmasgbaczpdxawe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395437.620533-446-164257693056293/AnsiballZ_file.py'
Dec 10 19:37:17 compute-0 sudo[106086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:18 compute-0 python3.9[106088]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:18 compute-0 sudo[106086]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:18 compute-0 sudo[106162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvnrtwkishklojdgmqyqdwmnrjcuynpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395437.620533-446-164257693056293/AnsiballZ_stat.py'
Dec 10 19:37:18 compute-0 sudo[106162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:18 compute-0 python3.9[106164]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:37:18 compute-0 sudo[106162]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:19 compute-0 sudo[106313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kafspkxwjrcjmumptjxppnukxehlxlgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395438.6796746-446-221636053149490/AnsiballZ_copy.py'
Dec 10 19:37:19 compute-0 sudo[106313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:19 compute-0 python3.9[106315]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395438.6796746-446-221636053149490/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:19 compute-0 sudo[106313]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:19 compute-0 sudo[106389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxlnuobkdvvvgtbbicxrtrjnfnsdllmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395438.6796746-446-221636053149490/AnsiballZ_systemd.py'
Dec 10 19:37:19 compute-0 sudo[106389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:19 compute-0 python3.9[106391]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:37:19 compute-0 systemd[1]: Reloading.
Dec 10 19:37:20 compute-0 systemd-sysv-generator[106420]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:37:20 compute-0 systemd-rc-local-generator[106417]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:37:20 compute-0 sudo[106389]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:20 compute-0 sudo[106500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifpkfibkfhiprbljqoyxdalukwdvhrjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395438.6796746-446-221636053149490/AnsiballZ_systemd.py'
Dec 10 19:37:20 compute-0 sudo[106500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:20 compute-0 python3.9[106502]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:20 compute-0 systemd[1]: Reloading.
Dec 10 19:37:20 compute-0 systemd-rc-local-generator[106531]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:37:20 compute-0 systemd-sysv-generator[106535]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:37:21 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 10 19:37:21 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ec873c1a99a03dafcea04c07a55d5f69393159b05a808e6ce8d1f90724988f/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 10 19:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ec873c1a99a03dafcea04c07a55d5f69393159b05a808e6ce8d1f90724988f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 19:37:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.
Dec 10 19:37:21 compute-0 podman[106543]: 2025-12-10 19:37:21.311019859 +0000 UTC m=+0.137112002 container init 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + sudo -E kolla_set_configs
Dec 10 19:37:21 compute-0 podman[106543]: 2025-12-10 19:37:21.342467637 +0000 UTC m=+0.168559760 container start 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 19:37:21 compute-0 edpm-start-podman-container[106543]: ovn_metadata_agent
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Validating config file
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Copying service configuration files
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Writing out command to execute
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 10 19:37:21 compute-0 podman[106566]: 2025-12-10 19:37:21.412108937 +0000 UTC m=+0.053766511 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: ++ cat /run_command
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + CMD=neutron-ovn-metadata-agent
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + ARGS=
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + sudo kolla_copy_cacerts
Dec 10 19:37:21 compute-0 edpm-start-podman-container[106542]: Creating additional drop-in dependency for "ovn_metadata_agent" (6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69)
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + [[ ! -n '' ]]
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + . kolla_extend_start
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: Running command: 'neutron-ovn-metadata-agent'
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + umask 0022
Dec 10 19:37:21 compute-0 ovn_metadata_agent[106559]: + exec neutron-ovn-metadata-agent
Dec 10 19:37:21 compute-0 systemd[1]: Reloading.
Dec 10 19:37:21 compute-0 systemd-rc-local-generator[106632]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:37:21 compute-0 systemd-sysv-generator[106638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:37:21 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 10 19:37:21 compute-0 sudo[106500]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:22 compute-0 sshd-session[98308]: Connection closed by 192.168.122.30 port 58896
Dec 10 19:37:22 compute-0 sshd-session[98305]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:37:22 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Dec 10 19:37:22 compute-0 systemd[1]: session-22.scope: Consumed 34.300s CPU time.
Dec 10 19:37:22 compute-0 systemd-logind[789]: Session 22 logged out. Waiting for processes to exit.
Dec 10 19:37:22 compute-0 systemd-logind[789]: Removed session 22.
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.291 106564 INFO neutron.common.config [-] Logging enabled!
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.292 106564 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.292 106564 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.292 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.293 106564 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.293 106564 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.293 106564 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.293 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.293 106564 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.293 106564 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.294 106564 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.294 106564 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.294 106564 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.294 106564 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.294 106564 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.294 106564 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.294 106564 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.295 106564 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.295 106564 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.295 106564 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.295 106564 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.295 106564 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.295 106564 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.296 106564 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.296 106564 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.296 106564 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.296 106564 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.296 106564 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.296 106564 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.296 106564 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.297 106564 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.297 106564 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.297 106564 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.297 106564 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.297 106564 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.297 106564 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.297 106564 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.298 106564 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.298 106564 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.298 106564 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.298 106564 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.298 106564 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.298 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.299 106564 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.300 106564 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.300 106564 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.300 106564 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.300 106564 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.300 106564 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.300 106564 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.300 106564 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.301 106564 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.301 106564 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.301 106564 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.301 106564 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.301 106564 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.301 106564 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.302 106564 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.302 106564 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.302 106564 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.302 106564 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.302 106564 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.302 106564 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.302 106564 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.303 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.303 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.303 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.303 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.303 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.303 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.303 106564 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.304 106564 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.304 106564 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.304 106564 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.304 106564 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.304 106564 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.304 106564 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.304 106564 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.305 106564 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.305 106564 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.305 106564 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.305 106564 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.305 106564 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.305 106564 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.306 106564 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.306 106564 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.306 106564 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.306 106564 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.306 106564 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.306 106564 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.306 106564 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.307 106564 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.307 106564 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.307 106564 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.307 106564 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.307 106564 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.307 106564 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.307 106564 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.308 106564 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.308 106564 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.308 106564 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.308 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.308 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.308 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.309 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.309 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.309 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.309 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.309 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.309 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.310 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.310 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.310 106564 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.310 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.310 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.310 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.310 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.311 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.311 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.311 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.311 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.311 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.311 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.312 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.312 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.312 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.312 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.312 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.312 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.312 106564 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.313 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.313 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.313 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.313 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.313 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.313 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.314 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.314 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.314 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.314 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.314 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.314 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.314 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.315 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.315 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.315 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.315 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.315 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.315 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.315 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.316 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.316 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.316 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.316 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.316 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.316 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.316 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.317 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.317 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.317 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.317 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.317 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.317 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.318 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.318 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.318 106564 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.318 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.318 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.318 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.318 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.319 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.319 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.319 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.319 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.319 106564 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.319 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.320 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.320 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.320 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.320 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.320 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.320 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.320 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.321 106564 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.321 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.321 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.321 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.321 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.321 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.322 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.322 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.322 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.322 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.322 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.322 106564 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.322 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.323 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.323 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.323 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.323 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.323 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.323 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.323 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.324 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.324 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.324 106564 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.324 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.324 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.324 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.324 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.325 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.325 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.325 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.325 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.325 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.325 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.326 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.326 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.326 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.326 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.326 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.326 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.326 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.327 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.327 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.327 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.327 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.327 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.327 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.327 106564 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.328 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.328 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.328 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.328 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.328 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.328 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.329 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.329 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.329 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.329 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.329 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.329 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.329 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.330 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.330 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.330 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.330 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.330 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.330 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.331 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.331 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.331 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.331 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.331 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.331 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.331 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.332 106564 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.332 106564 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.332 106564 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.332 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.332 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.332 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.333 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.333 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.333 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.333 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.333 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.333 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.333 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.334 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.334 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.334 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.334 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.334 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.334 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.335 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.335 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.335 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.335 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.335 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.335 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.335 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.336 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.336 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.336 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.336 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.336 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.337 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.337 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.337 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.338 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.338 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.338 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.338 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.338 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.339 106564 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.339 106564 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.356 106564 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.356 106564 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.356 106564 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.357 106564 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.357 106564 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.373 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7 (UUID: 7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.409 106564 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.409 106564 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.410 106564 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.410 106564 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.414 106564 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.421 106564 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.428 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], external_ids={}, name=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, nb_cfg_timestamp=1765395396929, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.430 106564 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f97ccbf5160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.431 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.431 106564 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.432 106564 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.432 106564 INFO oslo_service.service [-] Starting 1 workers
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.438 106564 DEBUG oslo_service.service [-] Started child 106671 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.442 106671 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-367422'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.443 106564 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpnyw2ydwm/privsep.sock']
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.465 106671 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.466 106671 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.466 106671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.470 106671 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.475 106671 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 10 19:37:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.481 106671 INFO eventlet.wsgi.server [-] (106671) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 10 19:37:23 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:24.102 106564 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:24.102 106564 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpnyw2ydwm/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.972 106676 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.978 106676 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.980 106676 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:23.981 106676 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106676
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:24.105 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2a044a-fd8f-49bf-9eab-73d0141ba026]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:24.608 106676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:24.609 106676 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:37:24 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:24.609 106676 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.175 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[03624330-1d49-41db-8a97-f569a1996998]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.178 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, column=external_ids, values=({'neutron:ovn-metadata-id': '8a909af6-ec12-5c32-9832-1fce6fccbcb7'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.211 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.220 106564 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.221 106564 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.221 106564 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.221 106564 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.221 106564 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.221 106564 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.221 106564 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.221 106564 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.222 106564 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.223 106564 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.224 106564 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.225 106564 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.225 106564 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.225 106564 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.225 106564 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.226 106564 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.227 106564 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.228 106564 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.229 106564 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.230 106564 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.231 106564 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.232 106564 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.233 106564 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.234 106564 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.235 106564 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.236 106564 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.237 106564 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.238 106564 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.239 106564 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.240 106564 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.241 106564 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.242 106564 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.243 106564 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.243 106564 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.243 106564 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.243 106564 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.243 106564 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.243 106564 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.243 106564 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.244 106564 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.245 106564 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.246 106564 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.247 106564 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.248 106564 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.249 106564 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.249 106564 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.249 106564 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.249 106564 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.249 106564 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.249 106564 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.249 106564 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.250 106564 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.250 106564 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.250 106564 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.250 106564 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.250 106564 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.250 106564 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.250 106564 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.251 106564 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.251 106564 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.251 106564 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.251 106564 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.251 106564 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.251 106564 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.252 106564 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.252 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.252 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.252 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.252 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.252 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.252 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.253 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.253 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.253 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.253 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.253 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.253 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.253 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.254 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.254 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.254 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.254 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.254 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.254 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.255 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.255 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.255 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.255 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.255 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.255 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.255 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.256 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.256 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.256 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.256 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.256 106564 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.256 106564 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.257 106564 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.257 106564 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.257 106564 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:37:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:37:25.257 106564 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:37:27 compute-0 sshd-session[106681]: Accepted publickey for zuul from 192.168.122.30 port 33628 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:37:27 compute-0 systemd-logind[789]: New session 23 of user zuul.
Dec 10 19:37:27 compute-0 systemd[1]: Started Session 23 of User zuul.
Dec 10 19:37:27 compute-0 sshd-session[106681]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:37:28 compute-0 python3.9[106834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:37:29 compute-0 podman[106863]: 2025-12-10 19:37:29.106883225 +0000 UTC m=+0.090415777 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 19:37:29 compute-0 sudo[107017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhsadwhhyjrpsbalvfswvwprvqhhovlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395449.0712655-34-55114821895680/AnsiballZ_command.py'
Dec 10 19:37:29 compute-0 sudo[107017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:29 compute-0 python3.9[107019]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:29 compute-0 sudo[107017]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:30 compute-0 sudo[107182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmhpalevlkeawijdyzqrnkszbromuptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395449.9962797-45-10307193992499/AnsiballZ_systemd_service.py'
Dec 10 19:37:30 compute-0 sudo[107182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:30 compute-0 python3.9[107184]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:37:30 compute-0 systemd[1]: Reloading.
Dec 10 19:37:30 compute-0 systemd-rc-local-generator[107212]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:37:30 compute-0 systemd-sysv-generator[107216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:37:31 compute-0 sudo[107182]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:31 compute-0 python3.9[107369]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:37:31 compute-0 network[107386]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:37:31 compute-0 network[107387]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:37:31 compute-0 network[107388]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:37:37 compute-0 sudo[107647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twiftwjxdjkqbumiwvcprmrhjrxvbdxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395456.85156-64-29386899837202/AnsiballZ_systemd_service.py'
Dec 10 19:37:37 compute-0 sudo[107647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:37 compute-0 python3.9[107649]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:37 compute-0 sudo[107647]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:38 compute-0 sudo[107800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbbddpuffhmvakrixrqokjepgueuhro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395457.716463-64-129452360336720/AnsiballZ_systemd_service.py'
Dec 10 19:37:38 compute-0 sudo[107800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:38 compute-0 python3.9[107802]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:38 compute-0 sudo[107800]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:38 compute-0 sudo[107953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttzskyzljeytlnpeimkdjkfdvdmklqdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395458.4631839-64-263800233494834/AnsiballZ_systemd_service.py'
Dec 10 19:37:38 compute-0 sudo[107953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:39 compute-0 python3.9[107955]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:39 compute-0 sudo[107953]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:39 compute-0 sudo[108106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnndkrgghfxsirmurteilehscdcybdkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395459.202548-64-111974803192863/AnsiballZ_systemd_service.py'
Dec 10 19:37:39 compute-0 sudo[108106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:39 compute-0 python3.9[108108]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:39 compute-0 sudo[108106]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:40 compute-0 sudo[108259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aseadpzeraztbcpezemzhvuzlhdvqfel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395459.938402-64-98260378573920/AnsiballZ_systemd_service.py'
Dec 10 19:37:40 compute-0 sudo[108259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:40 compute-0 python3.9[108261]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:40 compute-0 sudo[108259]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:40 compute-0 sudo[108412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcgrqfesowogqfflyxnkaclsulztyuuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395460.7109694-64-84273119198628/AnsiballZ_systemd_service.py'
Dec 10 19:37:40 compute-0 sudo[108412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:41 compute-0 python3.9[108414]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:41 compute-0 sudo[108412]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:41 compute-0 sudo[108565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trokkvrfswagpehidjsqqnrixranbles ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395461.4015276-64-127021559527769/AnsiballZ_systemd_service.py'
Dec 10 19:37:41 compute-0 sudo[108565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:42 compute-0 python3.9[108567]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:37:42 compute-0 sudo[108565]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:42 compute-0 sudo[108718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxlzimbznhotmiiassdslzcsyhoomfuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395462.4581013-116-105832137040602/AnsiballZ_file.py'
Dec 10 19:37:42 compute-0 sudo[108718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:43 compute-0 python3.9[108720]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:43 compute-0 sudo[108718]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:43 compute-0 sudo[108870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypuxgpeoxdsxvrxvlhwvnzfaddhvcjzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395463.2671108-116-101261460648405/AnsiballZ_file.py'
Dec 10 19:37:43 compute-0 sudo[108870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:43 compute-0 python3.9[108872]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:43 compute-0 sudo[108870]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:44 compute-0 sudo[109022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agoxbtrjyhpgumglyfqfwkrwjzkhmyvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395463.804957-116-262203872539417/AnsiballZ_file.py'
Dec 10 19:37:44 compute-0 sudo[109022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:44 compute-0 python3.9[109024]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:44 compute-0 sudo[109022]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:44 compute-0 sudo[109174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hscvffxnxlnnqgfpbgdnadsvpktwpugp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395464.4429502-116-139592011639571/AnsiballZ_file.py'
Dec 10 19:37:44 compute-0 sudo[109174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:44 compute-0 python3.9[109176]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:44 compute-0 sudo[109174]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:45 compute-0 sudo[109326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apryediyysuexadxkppxpkqyufgyrmdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395465.0293124-116-237241577547938/AnsiballZ_file.py'
Dec 10 19:37:45 compute-0 sudo[109326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:45 compute-0 python3.9[109328]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:45 compute-0 sudo[109326]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:45 compute-0 sudo[109478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omlxcrkexnedmgjrjciiqmuammfbpsrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395465.663697-116-182065902575408/AnsiballZ_file.py'
Dec 10 19:37:45 compute-0 sudo[109478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:46 compute-0 python3.9[109480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:46 compute-0 sudo[109478]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:46 compute-0 sudo[109630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aauqyxgwvsqsnibegykqwrirmqarszbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395466.350369-116-121440736438481/AnsiballZ_file.py'
Dec 10 19:37:46 compute-0 sudo[109630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:46 compute-0 python3.9[109632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:46 compute-0 sudo[109630]: pam_unix(sudo:session): session closed for user root
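The ansible.builtin.file tasks above (state=absent) delete the legacy TripleO nova/libvirt unit files from /usr/lib/systemd/system before the same paths are cleaned under /etc/systemd/system below. A minimal shell sketch of the equivalent manual cleanup, assuming the same unit names recorded in the log (editor's illustration, not part of the recorded run):
    # Equivalent of the ansible.builtin.file state=absent tasks above.
    for unit in tripleo_nova_libvirt.target \
                tripleo_nova_virtlogd_wrapper.service \
                tripleo_nova_virtnodedevd.service \
                tripleo_nova_virtproxyd.service \
                tripleo_nova_virtqemud.service \
                tripleo_nova_virtsecretd.service \
                tripleo_nova_virtstoraged.service; do
        rm -f "/usr/lib/systemd/system/${unit}"
    done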
Dec 10 19:37:47 compute-0 sudo[109782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyyxuhrwxnecotzggqvkrqzftucmfrpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395467.0565515-166-191254252229477/AnsiballZ_file.py'
Dec 10 19:37:47 compute-0 sudo[109782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:47 compute-0 python3.9[109784]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:47 compute-0 sudo[109782]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:48 compute-0 sudo[109934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynvnapbldktxzobdfphgaowdoazqgdra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395467.7172916-166-85537742741798/AnsiballZ_file.py'
Dec 10 19:37:48 compute-0 sudo[109934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:48 compute-0 python3.9[109936]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:48 compute-0 sudo[109934]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:48 compute-0 sudo[110086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geylvoeufrgepvizhueqirvawpayiasv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395468.4502685-166-94783923443915/AnsiballZ_file.py'
Dec 10 19:37:48 compute-0 sudo[110086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:48 compute-0 python3.9[110088]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:48 compute-0 sudo[110086]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:49 compute-0 sudo[110238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiqeocwvkyzutmzposijxmgfdzvwewqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395469.0789373-166-208952208289031/AnsiballZ_file.py'
Dec 10 19:37:49 compute-0 sudo[110238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:49 compute-0 python3.9[110240]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:49 compute-0 sudo[110238]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:50 compute-0 sudo[110390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryqpkbbpzjddpyqaapcxyjjljdqhqksr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395469.761599-166-226119004619056/AnsiballZ_file.py'
Dec 10 19:37:50 compute-0 sudo[110390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:50 compute-0 python3.9[110392]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:50 compute-0 sudo[110390]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:50 compute-0 sudo[110542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcgptlwnoehviskvarjcmpqqplmzyglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395470.4960814-166-273261672526245/AnsiballZ_file.py'
Dec 10 19:37:50 compute-0 sudo[110542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:50 compute-0 python3.9[110544]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:51 compute-0 sudo[110542]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:51 compute-0 sudo[110694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxqvknamzzwdwvcopirtvcqkjjyvhyul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395471.1518862-166-182620062287362/AnsiballZ_file.py'
Dec 10 19:37:51 compute-0 sudo[110694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:51 compute-0 podman[110696]: 2025-12-10 19:37:51.554638588 +0000 UTC m=+0.092839031 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 10 19:37:51 compute-0 python3.9[110697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:37:51 compute-0 sudo[110694]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:52 compute-0 sudo[110867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqesznjrhyacnvtzxelnvqfrfxuevjtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395471.882943-217-231486570627484/AnsiballZ_command.py'
Dec 10 19:37:52 compute-0 sudo[110867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:52 compute-0 python3.9[110869]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:52 compute-0 sudo[110867]: pam_unix(sudo:session): session closed for user root
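For readability, a commented restatement of the certmonger shell recorded in the command invocation above; the logic is unchanged from the log:
    # Only act if certmonger is currently active.
    if systemctl is-active certmonger.service; then
        # Stop and disable the running service.
        systemctl disable --now certmonger.service
        # Mask it only when no local unit override exists in /etc/systemd/system.
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi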
Dec 10 19:37:53 compute-0 python3.9[111021]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:37:53 compute-0 sudo[111171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfeiehcxcdvqcrrzqcjepnezfgtmchzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395473.4530985-235-198304379344535/AnsiballZ_systemd_service.py'
Dec 10 19:37:53 compute-0 sudo[111171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:54 compute-0 python3.9[111173]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:37:54 compute-0 systemd[1]: Reloading.
Dec 10 19:37:54 compute-0 systemd-rc-local-generator[111196]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:37:54 compute-0 systemd-sysv-generator[111201]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:37:54 compute-0 sudo[111171]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:54 compute-0 sudo[111358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhrhwytxzdfncxetqlfsnpflcwnfzdfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395474.4981427-243-138449678434945/AnsiballZ_command.py'
Dec 10 19:37:54 compute-0 sudo[111358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:54 compute-0 python3.9[111360]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:54 compute-0 sudo[111358]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:55 compute-0 sudo[111511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyxzzcuhraejqrhpeiarhlscjkmwkadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395475.1228616-243-20071869095313/AnsiballZ_command.py'
Dec 10 19:37:55 compute-0 sudo[111511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:55 compute-0 python3.9[111513]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:55 compute-0 sudo[111511]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:56 compute-0 sudo[111664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sevtzzjlbdmlwjatzcttrrwzoalcygba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395475.7600126-243-129964861250592/AnsiballZ_command.py'
Dec 10 19:37:56 compute-0 sudo[111664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:56 compute-0 python3.9[111666]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:56 compute-0 sudo[111664]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:56 compute-0 sudo[111817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heydwrtjlyzivmnvtiiqtvgcaztgmrhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395476.4291742-243-186703250868519/AnsiballZ_command.py'
Dec 10 19:37:56 compute-0 sudo[111817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:56 compute-0 python3.9[111819]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:56 compute-0 sudo[111817]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:57 compute-0 sudo[111970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geeyqwwqqzuwppouymylynktslpouwsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395477.1382895-243-57760173537914/AnsiballZ_command.py'
Dec 10 19:37:57 compute-0 sudo[111970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:57 compute-0 python3.9[111972]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:57 compute-0 sudo[111970]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:58 compute-0 sudo[112123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kacocbtjzolfsjyapezkgdrmniatmqpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395477.899104-243-188510721194982/AnsiballZ_command.py'
Dec 10 19:37:58 compute-0 sudo[112123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:58 compute-0 python3.9[112125]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:58 compute-0 sudo[112123]: pam_unix(sudo:session): session closed for user root
Dec 10 19:37:58 compute-0 sudo[112276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygnaufnfixdpguhfvixbgnvflictqkjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395478.5691876-243-163460341688312/AnsiballZ_command.py'
Dec 10 19:37:58 compute-0 sudo[112276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:37:59 compute-0 python3.9[112278]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:37:59 compute-0 sudo[112276]: pam_unix(sudo:session): session closed for user root
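The ansible.legacy.command tasks above clear any residual failed state left behind by the removed TripleO units. A loop form of the same reset-failed calls, using the unit names recorded in the log (editor's sketch):
    # Loop equivalent of the individual systemctl reset-failed tasks above.
    for unit in tripleo_nova_libvirt.target \
                tripleo_nova_virtlogd_wrapper.service \
                tripleo_nova_virtnodedevd.service \
                tripleo_nova_virtproxyd.service \
                tripleo_nova_virtqemud.service \
                tripleo_nova_virtsecretd.service \
                tripleo_nova_virtstoraged.service; do
        /usr/bin/systemctl reset-failed "${unit}"
    done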
Dec 10 19:37:59 compute-0 podman[112280]: 2025-12-10 19:37:59.27968186 +0000 UTC m=+0.109348207 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Dec 10 19:37:59 compute-0 sudo[112457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwlnjmptvjocwfowytvfxrueavjtbbdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395479.5293884-297-75503711397763/AnsiballZ_getent.py'
Dec 10 19:37:59 compute-0 sudo[112457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:38:00 compute-0 python3.9[112459]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 10 19:38:00 compute-0 sudo[112457]: pam_unix(sudo:session): session closed for user root
Dec 10 19:38:00 compute-0 sudo[112610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egitfvgnsaegfipobiyazopnaatvozzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395480.358452-305-11360236076681/AnsiballZ_group.py'
Dec 10 19:38:00 compute-0 sudo[112610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:38:00 compute-0 python3.9[112612]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 10 19:38:00 compute-0 groupadd[112613]: group added to /etc/group: name=libvirt, GID=42473
Dec 10 19:38:00 compute-0 groupadd[112613]: group added to /etc/gshadow: name=libvirt
Dec 10 19:38:01 compute-0 groupadd[112613]: new group: name=libvirt, GID=42473
Dec 10 19:38:01 compute-0 sudo[112610]: pam_unix(sudo:session): session closed for user root
Dec 10 19:38:01 compute-0 sudo[112768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmfhsivwkjkvktppmdbeeykntwyedmfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395481.2451456-313-280824316789277/AnsiballZ_user.py'
Dec 10 19:38:01 compute-0 sudo[112768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:38:02 compute-0 python3.9[112770]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 10 19:38:02 compute-0 useradd[112772]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 10 19:38:02 compute-0 sudo[112768]: pam_unix(sudo:session): session closed for user root
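The group and user tasks above create a static libvirt account (GID and UID 42473) with no login shell, as confirmed by the groupadd and useradd records. A minimal shell sketch of the manual equivalent, assuming the same IDs (editor's illustration):
    # Equivalent of the ansible.builtin.group and ansible.builtin.user tasks above.
    groupadd --gid 42473 libvirt
    useradd --uid 42473 --gid libvirt --comment 'libvirt user' \
            --shell /sbin/nologin libvirt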
Dec 10 19:38:02 compute-0 sudo[112928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjfufhwrjclfiltzjglmmcsgnotomle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395482.4405208-324-240484240313531/AnsiballZ_setup.py'
Dec 10 19:38:02 compute-0 sudo[112928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:38:03 compute-0 python3.9[112930]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:38:03 compute-0 sudo[112928]: pam_unix(sudo:session): session closed for user root
Dec 10 19:38:03 compute-0 sudo[113012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvywxvzepywvgsbxgbeensdvwkhbzacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395482.4405208-324-240484240313531/AnsiballZ_dnf.py'
Dec 10 19:38:03 compute-0 sudo[113012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:38:03 compute-0 python3.9[113014]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
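The ansible.legacy.dnf task above installs the libvirt and QEMU stack plus supporting packages (the trailing spaces inside some package names are as recorded in the invocation). An approximate CLI equivalent under that assumption (editor's sketch, not the command that actually ran):
    # Approximate equivalent of the dnf task above.
    dnf install -y libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram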
Dec 10 19:38:22 compute-0 podman[113205]: 2025-12-10 19:38:22.094751896 +0000 UTC m=+0.068702068 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:38:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:38:23.345 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:38:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:38:23.346 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:38:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:38:23.347 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:38:30 compute-0 podman[113224]: 2025-12-10 19:38:30.107322513 +0000 UTC m=+0.095471842 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 10 19:38:31 compute-0 kernel: SELinux:  Converting 2759 SID table entries...
Dec 10 19:38:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:38:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 10 19:38:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:38:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:38:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:38:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:38:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:38:41 compute-0 kernel: SELinux:  Converting 2759 SID table entries...
Dec 10 19:38:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:38:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 10 19:38:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:38:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:38:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:38:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:38:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:38:53 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 10 19:38:53 compute-0 podman[113265]: 2025-12-10 19:38:53.148483176 +0000 UTC m=+0.085920996 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:39:01 compute-0 podman[117484]: 2025-12-10 19:39:01.166389172 +0000 UTC m=+0.144913645 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:39:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:39:23.346 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:39:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:39:23.347 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:39:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:39:23.347 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:39:23 compute-0 podman[129793]: 2025-12-10 19:39:23.428847216 +0000 UTC m=+0.057754975 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 10 19:39:32 compute-0 podman[130134]: 2025-12-10 19:39:32.116424674 +0000 UTC m=+0.095546612 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:39:38 compute-0 kernel: SELinux:  Converting 2760 SID table entries...
Dec 10 19:39:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 10 19:39:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 10 19:39:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 10 19:39:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 10 19:39:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 10 19:39:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 10 19:39:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 10 19:39:39 compute-0 groupadd[130170]: group added to /etc/group: name=dnsmasq, GID=992
Dec 10 19:39:39 compute-0 groupadd[130170]: group added to /etc/gshadow: name=dnsmasq
Dec 10 19:39:39 compute-0 groupadd[130170]: new group: name=dnsmasq, GID=992
Dec 10 19:39:39 compute-0 useradd[130177]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 10 19:39:39 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 10 19:39:39 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 10 19:39:40 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 10 19:39:40 compute-0 groupadd[130190]: group added to /etc/group: name=clevis, GID=991
Dec 10 19:39:40 compute-0 groupadd[130190]: group added to /etc/gshadow: name=clevis
Dec 10 19:39:40 compute-0 groupadd[130190]: new group: name=clevis, GID=991
Dec 10 19:39:41 compute-0 useradd[130197]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 10 19:39:41 compute-0 usermod[130207]: add 'clevis' to group 'tss'
Dec 10 19:39:41 compute-0 usermod[130207]: add 'clevis' to shadow group 'tss'
Dec 10 19:39:43 compute-0 polkitd[43571]: Reloading rules
Dec 10 19:39:43 compute-0 polkitd[43571]: Collecting garbage unconditionally...
Dec 10 19:39:43 compute-0 polkitd[43571]: Loading rules from directory /etc/polkit-1/rules.d
Dec 10 19:39:43 compute-0 polkitd[43571]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 10 19:39:43 compute-0 polkitd[43571]: Finished loading, compiling and executing 3 rules
Dec 10 19:39:43 compute-0 polkitd[43571]: Reloading rules
Dec 10 19:39:43 compute-0 polkitd[43571]: Collecting garbage unconditionally...
Dec 10 19:39:43 compute-0 polkitd[43571]: Loading rules from directory /etc/polkit-1/rules.d
Dec 10 19:39:43 compute-0 polkitd[43571]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 10 19:39:43 compute-0 polkitd[43571]: Finished loading, compiling and executing 3 rules
Dec 10 19:39:44 compute-0 groupadd[130394]: group added to /etc/group: name=ceph, GID=167
Dec 10 19:39:44 compute-0 groupadd[130394]: group added to /etc/gshadow: name=ceph
Dec 10 19:39:44 compute-0 groupadd[130394]: new group: name=ceph, GID=167
Dec 10 19:39:44 compute-0 useradd[130400]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 10 19:39:47 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 10 19:39:47 compute-0 sshd[1004]: Received signal 15; terminating.
Dec 10 19:39:47 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 10 19:39:47 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 10 19:39:47 compute-0 systemd[1]: sshd.service: Consumed 1.470s CPU time, read 32.0K from disk, written 0B to disk.
Dec 10 19:39:47 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 10 19:39:47 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 10 19:39:47 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 10 19:39:47 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 10 19:39:47 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 10 19:39:47 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 10 19:39:47 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 10 19:39:47 compute-0 sshd[130919]: Server listening on 0.0.0.0 port 22.
Dec 10 19:39:47 compute-0 sshd[130919]: Server listening on :: port 22.
Dec 10 19:39:47 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 10 19:39:49 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:39:49 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:39:50 compute-0 systemd[1]: Reloading.
Dec 10 19:39:50 compute-0 systemd-rc-local-generator[131176]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:39:50 compute-0 systemd-sysv-generator[131181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:39:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:39:53 compute-0 sudo[113012]: pam_unix(sudo:session): session closed for user root
Dec 10 19:39:54 compute-0 podman[135417]: 2025-12-10 19:39:54.075450945 +0000 UTC m=+0.058692664 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 10 19:39:54 compute-0 sudo[135651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxttxogtsjtdwvyloltwepbkbvfcdhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395593.5553355-336-71083738458879/AnsiballZ_systemd.py'
Dec 10 19:39:54 compute-0 sudo[135651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:39:54 compute-0 python3.9[135681]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:39:54 compute-0 systemd[1]: Reloading.
Dec 10 19:39:54 compute-0 systemd-rc-local-generator[136074]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:39:54 compute-0 systemd-sysv-generator[136081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:39:54 compute-0 sudo[135651]: pam_unix(sudo:session): session closed for user root
Dec 10 19:39:55 compute-0 sudo[136846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrnzvmcbifxiswvryogzcmdmvqddjnas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395594.971321-336-36542465035153/AnsiballZ_systemd.py'
Dec 10 19:39:55 compute-0 sudo[136846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:39:55 compute-0 python3.9[136868]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:39:55 compute-0 systemd[1]: Reloading.
Dec 10 19:39:55 compute-0 systemd-rc-local-generator[137315]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:39:55 compute-0 systemd-sysv-generator[137320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:39:55 compute-0 sudo[136846]: pam_unix(sudo:session): session closed for user root
Dec 10 19:39:56 compute-0 sudo[138016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxosdhntnjluchqmslzzlnsqhrfcdhtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395595.9552693-336-211176719126561/AnsiballZ_systemd.py'
Dec 10 19:39:56 compute-0 sudo[138016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:39:56 compute-0 python3.9[138039]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:39:56 compute-0 systemd[1]: Reloading.
Dec 10 19:39:56 compute-0 systemd-rc-local-generator[138489]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:39:56 compute-0 systemd-sysv-generator[138493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:39:56 compute-0 sudo[138016]: pam_unix(sudo:session): session closed for user root
Dec 10 19:39:57 compute-0 sudo[139267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffbptejemfuavkpqjixzikyhqzbzelhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395596.9178548-336-269357990243351/AnsiballZ_systemd.py'
Dec 10 19:39:57 compute-0 sudo[139267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:39:57 compute-0 python3.9[139285]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:39:57 compute-0 systemd[1]: Reloading.
Dec 10 19:39:57 compute-0 systemd-rc-local-generator[139727]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:39:57 compute-0 systemd-sysv-generator[139731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:39:57 compute-0 sudo[139267]: pam_unix(sudo:session): session closed for user root
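The four ansible.builtin.systemd tasks above stop, disable, and mask the monolithic libvirtd daemon together with its TCP and TLS sockets before the modular daemons are enabled below. A compact shell sketch of the same masking step (editor's illustration, assuming the unit names recorded in the log):
    # Equivalent of the masking tasks above.
    systemctl disable --now libvirtd.service
    systemctl mask libvirtd.service libvirtd-tcp.socket \
                   libvirtd-tls.socket virtproxyd-tcp.socket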
Dec 10 19:39:58 compute-0 sudo[140406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cktfqourgpinftnomimfyvmclldplyet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395597.9649966-365-216303261292898/AnsiballZ_systemd.py'
Dec 10 19:39:58 compute-0 sudo[140406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:39:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:39:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:39:58 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.722s CPU time.
Dec 10 19:39:58 compute-0 systemd[1]: run-r2218694fb01f49958cf3aec1e695ce2e.service: Deactivated successfully.
Dec 10 19:39:58 compute-0 python3.9[140425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:39:58 compute-0 systemd[1]: Reloading.
Dec 10 19:39:58 compute-0 systemd-rc-local-generator[140523]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:39:58 compute-0 systemd-sysv-generator[140527]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:39:58 compute-0 sudo[140406]: pam_unix(sudo:session): session closed for user root
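The virtlogd.service task just above uses the opposite flags (enabled=True, masked=False, no state), so only the unit's install state changes and nothing is started yet. A rough equivalent, with the service list taken from the identical tasks that follow below:

    import subprocess

    def unmask_enable(unit):
        """Rough equivalent of ansible.builtin.systemd with enabled=True, masked=False and no state."""
        subprocess.run(["systemctl", "unmask", unit], check=True)  # clear any mask symlink first
        subprocess.run(["systemctl", "enable", unit], check=True)  # enable for boot, but do not start now

    for service in ("virtlogd.service", "virtnodedevd.service", "virtproxyd.service",
                    "virtqemud.service", "virtsecretd.service"):
        unmask_enable(service)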
Dec 10 19:39:59 compute-0 sudo[140681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxhblvqzuuwvsspykwxaxxaklbtnjutv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395599.0595014-365-275559480856236/AnsiballZ_systemd.py'
Dec 10 19:39:59 compute-0 sudo[140681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:39:59 compute-0 python3.9[140683]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:39:59 compute-0 systemd[1]: Reloading.
Dec 10 19:39:59 compute-0 systemd-rc-local-generator[140714]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:39:59 compute-0 systemd-sysv-generator[140717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:39:59 compute-0 sudo[140681]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:00 compute-0 sudo[140871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyadzpzwcfwsvslbwdwtjpfqbewjbgnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395600.1081398-365-39080610205612/AnsiballZ_systemd.py'
Dec 10 19:40:00 compute-0 sudo[140871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:00 compute-0 python3.9[140873]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:00 compute-0 systemd[1]: Reloading.
Dec 10 19:40:00 compute-0 systemd-rc-local-generator[140906]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:40:00 compute-0 systemd-sysv-generator[140910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:40:00 compute-0 sudo[140871]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:01 compute-0 sudo[141061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllwpqzuuxwwvewogrrdummvrknuhuff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395601.125863-365-70851051404349/AnsiballZ_systemd.py'
Dec 10 19:40:01 compute-0 sudo[141061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:01 compute-0 python3.9[141063]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:01 compute-0 sudo[141061]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:02 compute-0 sudo[141225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouibayxijwwwjuffqwcpfvgkojwkmcti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395601.9952536-365-212523140815860/AnsiballZ_systemd.py'
Dec 10 19:40:02 compute-0 sudo[141225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:02 compute-0 podman[141190]: 2025-12-10 19:40:02.404226112 +0000 UTC m=+0.114796074 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 10 19:40:02 compute-0 python3.9[141235]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:02 compute-0 systemd[1]: Reloading.
Dec 10 19:40:02 compute-0 systemd-rc-local-generator[141271]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:40:02 compute-0 systemd-sysv-generator[141275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:40:03 compute-0 sudo[141225]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:03 compute-0 sudo[141430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufvzzbzfdmotlvlusmxmhhwhpmlvecnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395603.2250853-401-72395021132919/AnsiballZ_systemd.py'
Dec 10 19:40:03 compute-0 sudo[141430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:03 compute-0 python3.9[141432]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 10 19:40:04 compute-0 systemd[1]: Reloading.
Dec 10 19:40:04 compute-0 systemd-rc-local-generator[141463]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:40:04 compute-0 systemd-sysv-generator[141467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:40:04 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 10 19:40:04 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 10 19:40:04 compute-0 sudo[141430]: pam_unix(sudo:session): session closed for user root
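After virtproxyd-tls.socket is enabled and started, systemd reports "Listening on libvirt proxy daemon TLS IP socket", so the socket unit is now accepting connections and will activate virtproxyd on demand. A quick local reachability probe, assuming the default libvirt TLS port 16514 (the port is not shown in the log):

    import socket

    HOST, PORT = "127.0.0.1", 16514  # assumed default libvirt TLS port

    try:
        # A plain TCP connect only proves the socket unit is listening; a real
        # client would still need to complete the TLS handshake with valid certs.
        with socket.create_connection((HOST, PORT), timeout=3):
            print(f"virtproxyd-tls.socket is accepting connections on {HOST}:{PORT}")
    except OSError as exc:
        print(f"no listener on {HOST}:{PORT}: {exc}")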
Dec 10 19:40:04 compute-0 sudo[141623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmyqkqlwhfdyvzydvaqigxcymgqunckl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395604.5549545-409-71111022269880/AnsiballZ_systemd.py'
Dec 10 19:40:04 compute-0 sudo[141623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:05 compute-0 python3.9[141625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:05 compute-0 sudo[141623]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:05 compute-0 sudo[141778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mykrkwpbhcrxyjpbaiwniptsjccqbytx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395605.525847-409-59111869551305/AnsiballZ_systemd.py'
Dec 10 19:40:05 compute-0 sudo[141778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:06 compute-0 python3.9[141780]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:06 compute-0 sudo[141778]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:06 compute-0 sudo[141933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujvwzfqbclciuopbhfqlocvrwnofagpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395606.3341415-409-25506120217637/AnsiballZ_systemd.py'
Dec 10 19:40:06 compute-0 sudo[141933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:06 compute-0 python3.9[141935]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:06 compute-0 sudo[141933]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:07 compute-0 sudo[142088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixssxgiuszqpnadthgsdzklgoqysrlgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395607.08696-409-167420043173121/AnsiballZ_systemd.py'
Dec 10 19:40:07 compute-0 sudo[142088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:07 compute-0 python3.9[142090]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:07 compute-0 sudo[142088]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:08 compute-0 sudo[142243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oydhizyelotyzuhqlpmgsmihrymosojm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395608.0128496-409-6072164747/AnsiballZ_systemd.py'
Dec 10 19:40:08 compute-0 sudo[142243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:08 compute-0 python3.9[142245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:08 compute-0 sudo[142243]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:09 compute-0 sudo[142398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrdxxdkmnmezslgwcolvtewgpobacaqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395608.7781374-409-147511925894220/AnsiballZ_systemd.py'
Dec 10 19:40:09 compute-0 sudo[142398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:09 compute-0 python3.9[142400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:09 compute-0 sudo[142398]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:10 compute-0 sudo[142553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnasdeoyogkaaeztvaglvvnvxuftxscm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395609.6992862-409-225812520738252/AnsiballZ_systemd.py'
Dec 10 19:40:10 compute-0 sudo[142553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:10 compute-0 python3.9[142555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:10 compute-0 sudo[142553]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:11 compute-0 sudo[142708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huevdtsntzqfwgtemwjqzskepsgkdycm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395610.632359-409-18413742345742/AnsiballZ_systemd.py'
Dec 10 19:40:11 compute-0 sudo[142708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:11 compute-0 python3.9[142710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:12 compute-0 sudo[142708]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:12 compute-0 sudo[142863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymcluengbtdewtjbwouapvybtihxjqlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395612.6269295-409-252395162012474/AnsiballZ_systemd.py'
Dec 10 19:40:12 compute-0 sudo[142863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:13 compute-0 python3.9[142865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:13 compute-0 sudo[142863]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:13 compute-0 sudo[143018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdumycqpsqxjexxqxlkpyxavuvjqfivo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395613.5404475-409-159363454236240/AnsiballZ_systemd.py'
Dec 10 19:40:13 compute-0 sudo[143018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:14 compute-0 python3.9[143020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:14 compute-0 sudo[143018]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:14 compute-0 sudo[143173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svndfzckkoohcwgudrmbsrhyxkjtsyea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395614.4357522-409-252401817361472/AnsiballZ_systemd.py'
Dec 10 19:40:14 compute-0 sudo[143173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:15 compute-0 python3.9[143175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:15 compute-0 sudo[143173]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:15 compute-0 sudo[143328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktjdvsrrjuosptxwntlnbespqgjwcisb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395615.2600932-409-49244252768238/AnsiballZ_systemd.py'
Dec 10 19:40:15 compute-0 sudo[143328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:15 compute-0 python3.9[143330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:16 compute-0 sudo[143328]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:16 compute-0 sudo[143483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cosdkmknmuuerqatmjhymwnirzhhryih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395616.1932576-409-166528009539776/AnsiballZ_systemd.py'
Dec 10 19:40:16 compute-0 sudo[143483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:16 compute-0 python3.9[143485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:16 compute-0 sudo[143483]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:17 compute-0 sudo[143638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brxjaryebmxlttfmlvfthkbmmgdblxmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395617.009785-409-278147214264035/AnsiballZ_systemd.py'
Dec 10 19:40:17 compute-0 sudo[143638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:17 compute-0 python3.9[143640]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 10 19:40:17 compute-0 sudo[143638]: pam_unix(sudo:session): session closed for user root
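The block of tasks ending here enables the socket units for each modular libvirt daemon: virtlogd gets its main and admin sockets, while virtnodedevd, virtproxyd, virtqemud and virtsecretd each get a main, read-only and admin socket (virtproxyd's TLS socket was handled separately above). A sketch of that uniform pattern, again approximated with systemctl rather than the Ansible module:

    import subprocess

    sockets = ["virtlogd.socket", "virtlogd-admin.socket"]
    for daemon in ("virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"):
        sockets += [f"{daemon}.socket", f"{daemon}-ro.socket", f"{daemon}-admin.socket"]

    for unit in sockets:
        # enabled=True, masked=False, no state: unmask and enable, do not start
        subprocess.run(["systemctl", "unmask", unit], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)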
Dec 10 19:40:18 compute-0 sudo[143793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuuekzqrrkexilaumoeqaykpgokqjugd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395618.0364475-511-275213166768944/AnsiballZ_file.py'
Dec 10 19:40:18 compute-0 sudo[143793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:18 compute-0 python3.9[143795]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:40:18 compute-0 sudo[143793]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:18 compute-0 sudo[143945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fatkiprinjxrvrzamwgfdyavlsdhzavs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395618.6391747-511-55622286070407/AnsiballZ_file.py'
Dec 10 19:40:18 compute-0 sudo[143945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:19 compute-0 python3.9[143947]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:40:19 compute-0 sudo[143945]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:19 compute-0 sudo[144097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalsnvnkgmasypvlcemgqwuyvlaiqxhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395619.1903913-511-173714332726467/AnsiballZ_file.py'
Dec 10 19:40:19 compute-0 sudo[144097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:19 compute-0 python3.9[144099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:40:19 compute-0 sudo[144097]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:20 compute-0 sudo[144249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adcwqbyridpybohqmkpwesrhxdyeejjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395619.799355-511-218243226031300/AnsiballZ_file.py'
Dec 10 19:40:20 compute-0 sudo[144249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:20 compute-0 python3.9[144251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:40:20 compute-0 sudo[144249]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:20 compute-0 sudo[144401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgabdmcfhoauzqexlyikjbxpbxqlbenn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395620.477918-511-41316900548191/AnsiballZ_file.py'
Dec 10 19:40:20 compute-0 sudo[144401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:20 compute-0 python3.9[144403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:40:20 compute-0 sudo[144401]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:21 compute-0 sudo[144553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrzaoauenfkmyuvuesvykmhixppfkra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395621.069426-511-32109044101082/AnsiballZ_file.py'
Dec 10 19:40:21 compute-0 sudo[144553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:21 compute-0 python3.9[144555]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:40:21 compute-0 sudo[144553]: pam_unix(sudo:session): session closed for user root
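The ansible.builtin.file tasks above create a set of directories (/etc/tmpfiles.d, /var/lib/edpm-config/firewall, /etc/pki/libvirt, /etc/pki/libvirt/private, /etc/pki/CA, /etc/pki/qemu) owned by root with the container_file_t SELinux type. A rough local equivalent, assuming the chcon CLI is available for the relabel (Ansible itself uses libselinux bindings):

    import os
    import shutil
    import subprocess

    def make_dir(path, owner="root", group="root", mode=None, setype="container_file_t"):
        """Roughly what ansible.builtin.file state=directory did for the paths above."""
        os.makedirs(path, exist_ok=True)
        shutil.chown(path, user=owner, group=group)
        if mode is not None:
            os.chmod(path, mode)
        subprocess.run(["chcon", "-t", setype, path], check=True)  # apply the SELinux type

    make_dir("/etc/pki/libvirt", mode=0o755)
    make_dir("/etc/pki/libvirt/private", mode=0o755)
    make_dir("/etc/pki/qemu", group="qemu")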
Dec 10 19:40:22 compute-0 sudo[144705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjevazkgnuhpcjyybterguuteggrcrqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395621.7459054-554-268351636920199/AnsiballZ_stat.py'
Dec 10 19:40:22 compute-0 sudo[144705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:22 compute-0 python3.9[144707]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:22 compute-0 sudo[144705]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:22 compute-0 sudo[144830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zroqhlbfckgsmhddyvmwuymybwwzcfnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395621.7459054-554-268351636920199/AnsiballZ_copy.py'
Dec 10 19:40:22 compute-0 sudo[144830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:23 compute-0 python3.9[144832]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395621.7459054-554-268351636920199/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:23 compute-0 sudo[144830]: pam_unix(sudo:session): session closed for user root
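Each libvirt config file is handled with the usual Ansible stat-then-copy pair: ansible.legacy.stat takes the SHA-1 of the existing file, and ansible.legacy.copy only rewrites it when that checksum differs from the rendered source. A minimal sketch of the idempotence check, using the destination and checksum from the virtlogd.conf task above (the staged source path is shortened here):

    import hashlib
    import shutil
    from pathlib import Path

    def copy_if_changed(src, dest, want_sha1):
        """Copy src over dest only when dest is missing or its SHA-1 differs."""
        dest_path = Path(dest)
        if dest_path.exists():
            if hashlib.sha1(dest_path.read_bytes()).hexdigest() == want_sha1:
                return False  # already up to date, report "ok" instead of "changed"
        shutil.copyfile(src, dest)
        return True

    copy_if_changed(
        "/home/zuul/.ansible/tmp/.../.source.conf",          # staged source from the controller
        "/etc/libvirt/virtlogd.conf",
        "d7a72ae92c2c205983b029473e05a6aa4c58ec24",           # checksum recorded in the log
    )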
Dec 10 19:40:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:40:23.348 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:40:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:40:23.350 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:40:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:40:23.350 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
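Interleaved with the Ansible run, ovn_metadata_agent keeps logging its periodic child-process check; the acquire/release pair comes from oslo.concurrency's lockutils. A small sketch of the same locking primitive, mirroring only the pattern visible in these debug lines, not the agent's actual code:

    from oslo_concurrency import lockutils

    # The debug lines show a lock named "_check_child_processes" being acquired,
    # held briefly, and released around the monitor's periodic check.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # inspect the monitored external processes here
        pass

    check_child_processes()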
Dec 10 19:40:23 compute-0 sudo[144982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqippqxhoqrhjmxzhjdrkogoiftvkqpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395623.3114686-554-68838600105843/AnsiballZ_stat.py'
Dec 10 19:40:23 compute-0 sudo[144982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:23 compute-0 python3.9[144984]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:24 compute-0 sudo[144982]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:24 compute-0 sudo[145120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xylesuznziaqtxxeelmlwoqmljdfwirl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395623.3114686-554-68838600105843/AnsiballZ_copy.py'
Dec 10 19:40:24 compute-0 sudo[145120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:24 compute-0 podman[145081]: 2025-12-10 19:40:24.438957476 +0000 UTC m=+0.086659666 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:40:24 compute-0 python3.9[145126]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395623.3114686-554-68838600105843/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:24 compute-0 sudo[145120]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:25 compute-0 sudo[145279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htcxqxhozwhemgqlmyyqjfsyvyymedgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395624.7714207-554-118335551286992/AnsiballZ_stat.py'
Dec 10 19:40:25 compute-0 sudo[145279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:25 compute-0 python3.9[145281]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:25 compute-0 sudo[145279]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:25 compute-0 sudo[145404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flgxppewknomdcntakhranykfczgrgvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395624.7714207-554-118335551286992/AnsiballZ_copy.py'
Dec 10 19:40:25 compute-0 sudo[145404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:25 compute-0 python3.9[145406]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395624.7714207-554-118335551286992/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:25 compute-0 sudo[145404]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:26 compute-0 sudo[145556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydorvvgxfbglywworsmjnwrzrexnfdct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395626.1035235-554-142465819613184/AnsiballZ_stat.py'
Dec 10 19:40:26 compute-0 sudo[145556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:26 compute-0 python3.9[145558]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:26 compute-0 sudo[145556]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:26 compute-0 sudo[145681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clcwmcrudomubqmrrwgthnfepiiyyaix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395626.1035235-554-142465819613184/AnsiballZ_copy.py'
Dec 10 19:40:26 compute-0 sudo[145681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:27 compute-0 python3.9[145683]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395626.1035235-554-142465819613184/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:27 compute-0 sudo[145681]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:27 compute-0 sudo[145833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdzxtcjkjuekuiofukpztnsjdblvdhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395627.3372095-554-105354484559628/AnsiballZ_stat.py'
Dec 10 19:40:27 compute-0 sudo[145833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:27 compute-0 python3.9[145835]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:27 compute-0 sudo[145833]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:28 compute-0 sudo[145958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpiydepsfxriufukuhejujlzhlqitiji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395627.3372095-554-105354484559628/AnsiballZ_copy.py'
Dec 10 19:40:28 compute-0 sudo[145958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:28 compute-0 python3.9[145960]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395627.3372095-554-105354484559628/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:28 compute-0 sudo[145958]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:29 compute-0 sudo[146110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njibgbfhicuvnovnxgzqxaxcmtiojxkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395628.7405725-554-156647662912862/AnsiballZ_stat.py'
Dec 10 19:40:29 compute-0 sudo[146110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:29 compute-0 python3.9[146112]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:29 compute-0 sudo[146110]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:29 compute-0 sudo[146235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phersduyyzukiokhnkpkhiyhkiintxwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395628.7405725-554-156647662912862/AnsiballZ_copy.py'
Dec 10 19:40:29 compute-0 sudo[146235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:29 compute-0 python3.9[146237]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395628.7405725-554-156647662912862/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:29 compute-0 sudo[146235]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:30 compute-0 sudo[146387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bemfqfbrcgvrumrjsebwybyehiyjhgov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395630.1509955-554-47185962690132/AnsiballZ_stat.py'
Dec 10 19:40:30 compute-0 sudo[146387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:30 compute-0 python3.9[146389]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:30 compute-0 sudo[146387]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:31 compute-0 sudo[146510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurxsubomlhmoqtkzhzykrdwskwkmxgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395630.1509955-554-47185962690132/AnsiballZ_copy.py'
Dec 10 19:40:31 compute-0 sudo[146510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:31 compute-0 python3.9[146512]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395630.1509955-554-47185962690132/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:31 compute-0 sudo[146510]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:31 compute-0 sudo[146662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elngbnphjsjbyiojrakiszdlwwfuuxsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395631.5101433-554-200752227602791/AnsiballZ_stat.py'
Dec 10 19:40:31 compute-0 sudo[146662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:31 compute-0 python3.9[146664]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:32 compute-0 sudo[146662]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:32 compute-0 sudo[146787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhytzcpzyghmjrgadalzljdlookpowha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395631.5101433-554-200752227602791/AnsiballZ_copy.py'
Dec 10 19:40:32 compute-0 sudo[146787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:32 compute-0 python3.9[146789]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765395631.5101433-554-200752227602791/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:32 compute-0 sudo[146787]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:32 compute-0 podman[146790]: 2025-12-10 19:40:32.680917384 +0000 UTC m=+0.100584541 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
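The podman lines scattered through this section are the periodic health checks for the ovn_controller and ovn_metadata_agent containers (health_status=healthy, test '/openstack/healthcheck' mounted into the container). The same check can be triggered by hand with podman's healthcheck subcommand; a minimal sketch:

    import subprocess

    # 'podman healthcheck run' executes the configured check once and exits
    # non-zero when it fails.
    for name in ("ovn_controller", "ovn_metadata_agent"):
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(f"{name}: {'healthy' if result.returncode == 0 else 'unhealthy'}")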
Dec 10 19:40:32 compute-0 sudo[146965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdjzqcwvjzfvmrytkqhavzsdeuokstu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395632.7535837-667-274881790071432/AnsiballZ_command.py'
Dec 10 19:40:32 compute-0 sudo[146965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:33 compute-0 python3.9[146967]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 10 19:40:33 compute-0 sudo[146965]: pam_unix(sudo:session): session closed for user root
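The command task above seeds the SASL database used for libvirt migration authentication: saslpasswd2 adds a 'migration' user in the 'openstack' realm to /etc/libvirt/passwd.db, reading the password from stdin (the log records it as 12345678, evidently a CI placeholder). A sketch of the same call plus a listing of the resulting entries:

    import subprocess

    DB = "/etc/libvirt/passwd.db"

    # Same flags as the logged command: -f database, -p password on stdin,
    # -a libvirt (application), -u openstack (realm), user 'migration'.
    subprocess.run(
        ["saslpasswd2", "-f", DB, "-p", "-a", "libvirt", "-u", "openstack", "migration"],
        input="12345678\n", text=True, check=True,
    )

    # Verify the entry landed in the database.
    subprocess.run(["sasldblistusers2", "-f", DB], check=True)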
Dec 10 19:40:33 compute-0 sudo[147118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocpwhcbapilytyzxhnocdruewhjfnbua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395633.4032776-676-160146343740096/AnsiballZ_file.py'
Dec 10 19:40:33 compute-0 sudo[147118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:33 compute-0 python3.9[147120]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:33 compute-0 sudo[147118]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:34 compute-0 sudo[147270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djkhhknidjkomopstpdseefipberlhep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395633.9747689-676-41894098705629/AnsiballZ_file.py'
Dec 10 19:40:34 compute-0 sudo[147270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:34 compute-0 python3.9[147272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:34 compute-0 sudo[147270]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:34 compute-0 sudo[147422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rykowohcpdsgcbxdvdrxrvbcsunaubqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395634.6571288-676-113263913727866/AnsiballZ_file.py'
Dec 10 19:40:34 compute-0 sudo[147422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:35 compute-0 python3.9[147424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:35 compute-0 sudo[147422]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:35 compute-0 sudo[147574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajgtpsnhgqpzwcdgtvdufelbnjswzvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395635.2909567-676-213733648106271/AnsiballZ_file.py'
Dec 10 19:40:35 compute-0 sudo[147574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:35 compute-0 python3.9[147576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:35 compute-0 sudo[147574]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:36 compute-0 sudo[147726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpondhhsfujgizevgisimcgcdahmzvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395635.9271207-676-130683849043374/AnsiballZ_file.py'
Dec 10 19:40:36 compute-0 sudo[147726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:36 compute-0 python3.9[147728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:36 compute-0 sudo[147726]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:36 compute-0 sudo[147878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zedsnbjlpadwnoyblifwrutnshmoekai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395636.6376512-676-36733190026432/AnsiballZ_file.py'
Dec 10 19:40:36 compute-0 sudo[147878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:37 compute-0 python3.9[147880]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:37 compute-0 sudo[147878]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:37 compute-0 sudo[148030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftthwepatormedjeqcbhwwdfekdbfzqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395637.2943625-676-60097604438607/AnsiballZ_file.py'
Dec 10 19:40:37 compute-0 sudo[148030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:37 compute-0 python3.9[148032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:37 compute-0 sudo[148030]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:38 compute-0 sudo[148182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fexyolobfapjsxepvbdtmnyyhkporijj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395637.9258661-676-128738902443518/AnsiballZ_file.py'
Dec 10 19:40:38 compute-0 sudo[148182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:38 compute-0 python3.9[148184]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:38 compute-0 sudo[148182]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:38 compute-0 sudo[148334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nakoimsmyxlfihtgcjuvrnoxhislflss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395638.6122346-676-232499306965998/AnsiballZ_file.py'
Dec 10 19:40:38 compute-0 sudo[148334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:39 compute-0 python3.9[148336]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:39 compute-0 sudo[148334]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:39 compute-0 sudo[148486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgcyycgxuskhvpblemyxpacetjngdmnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395639.2903688-676-152854922062889/AnsiballZ_file.py'
Dec 10 19:40:39 compute-0 sudo[148486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:39 compute-0 python3.9[148488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:39 compute-0 sudo[148486]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:40 compute-0 sudo[148638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnayeyonrreqoogqwvmqmjjfovpaxmnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395639.9452162-676-259629708226046/AnsiballZ_file.py'
Dec 10 19:40:40 compute-0 sudo[148638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:40 compute-0 python3.9[148640]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:40 compute-0 sudo[148638]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:40 compute-0 sudo[148790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krgvcuiwzgvhatxpqkxlarzmwexuzxys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395640.615511-676-201010759480138/AnsiballZ_file.py'
Dec 10 19:40:40 compute-0 sudo[148790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:41 compute-0 python3.9[148792]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:41 compute-0 sudo[148790]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:41 compute-0 sudo[148942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvumjxoscwuzihbxdahopqivlprihcbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395641.186497-676-138015401900623/AnsiballZ_file.py'
Dec 10 19:40:41 compute-0 sudo[148942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:41 compute-0 python3.9[148944]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:41 compute-0 sudo[148942]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:42 compute-0 sudo[149094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bakpkringpiizenhzeycydocvckfhxzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395641.825888-676-3830452593435/AnsiballZ_file.py'
Dec 10 19:40:42 compute-0 sudo[149094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:42 compute-0 python3.9[149096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:42 compute-0 sudo[149094]: pam_unix(sudo:session): session closed for user root
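The ansible.builtin.file tasks above create systemd drop-in directories (mode 0755, root:root) for the modular libvirt socket units. A minimal manual equivalent for two of the logged paths, assuming the same unit names, would be:

    # manual equivalent of the directory-creation tasks above (sketch, not the playbook itself)
    install -d -m 0755 -o root -g root /etc/systemd/system/virtqemud-admin.socket.d
    install -d -m 0755 -o root -g root /etc/systemd/system/virtsecretd-admin.socket.d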
Dec 10 19:40:42 compute-0 sudo[149246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stgfdxaeflntzpwktzfcygcsyjtnehbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395642.5133488-775-213249001713072/AnsiballZ_stat.py'
Dec 10 19:40:42 compute-0 sudo[149246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:43 compute-0 python3.9[149248]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:43 compute-0 sudo[149246]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:43 compute-0 sudo[149369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrigndwasglpszvxlyhutjdlgmjtvecj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395642.5133488-775-213249001713072/AnsiballZ_copy.py'
Dec 10 19:40:43 compute-0 sudo[149369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:43 compute-0 python3.9[149371]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395642.5133488-775-213249001713072/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:43 compute-0 sudo[149369]: pam_unix(sudo:session): session closed for user root
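The stat/copy pair above renders libvirt-socket.unit.j2 into /etc/systemd/system/virtlogd.socket.d/override.conf; the same template is reused for every socket below, so each copy reports the identical SHA-1 checksum. The template body itself is not logged, but the installed file can be checked against the recorded checksum:

    # check the rendered drop-in against the checksum recorded in the log entry above
    sha1sum /etc/systemd/system/virtlogd.socket.d/override.conf
    # expected: 0bad41f409b4ee7e780a2a59dc18f5c84ed99826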
Dec 10 19:40:44 compute-0 sudo[149521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmfiimrrcajcjsmhgmxkykabiacbbqgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395643.9028492-775-156299914028490/AnsiballZ_stat.py'
Dec 10 19:40:44 compute-0 sudo[149521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:44 compute-0 python3.9[149523]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:44 compute-0 sudo[149521]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:44 compute-0 sudo[149644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buppttrdonbcfinksqzfdhawcmvzabif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395643.9028492-775-156299914028490/AnsiballZ_copy.py'
Dec 10 19:40:44 compute-0 sudo[149644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:44 compute-0 python3.9[149646]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395643.9028492-775-156299914028490/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:45 compute-0 sudo[149644]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:45 compute-0 sudo[149796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dujebbmgirvekufeywatkofuvktlubjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395645.1763983-775-89078433460047/AnsiballZ_stat.py'
Dec 10 19:40:45 compute-0 sudo[149796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:45 compute-0 python3.9[149798]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:45 compute-0 sudo[149796]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:46 compute-0 sudo[149919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odrrpdqnslddcwtgaoviqpbvzccizvgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395645.1763983-775-89078433460047/AnsiballZ_copy.py'
Dec 10 19:40:46 compute-0 sudo[149919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:46 compute-0 python3.9[149921]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395645.1763983-775-89078433460047/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:46 compute-0 sudo[149919]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:46 compute-0 sudo[150071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckglinyauytqiecsanbkcqaiseoyguzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395646.3545613-775-8069883985258/AnsiballZ_stat.py'
Dec 10 19:40:46 compute-0 sudo[150071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:46 compute-0 python3.9[150073]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:46 compute-0 sudo[150071]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:47 compute-0 sudo[150194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pprubpsbijwtzsvnimydmumnswpcztpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395646.3545613-775-8069883985258/AnsiballZ_copy.py'
Dec 10 19:40:47 compute-0 sudo[150194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:47 compute-0 python3.9[150196]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395646.3545613-775-8069883985258/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:47 compute-0 sudo[150194]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:48 compute-0 sudo[150346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aikcezzgqlyqfhsgmjuofokmylxekscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395647.825125-775-214146719489736/AnsiballZ_stat.py'
Dec 10 19:40:48 compute-0 sudo[150346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:48 compute-0 python3.9[150348]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:48 compute-0 sudo[150346]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:48 compute-0 sudo[150469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgpmulqojizpxowrbiqrkehrtjhdjveo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395647.825125-775-214146719489736/AnsiballZ_copy.py'
Dec 10 19:40:48 compute-0 sudo[150469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:48 compute-0 python3.9[150471]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395647.825125-775-214146719489736/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:48 compute-0 sudo[150469]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:49 compute-0 sudo[150621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhzqfhgmhjjqswnlcrhcdztqojbvqadw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395649.0687182-775-24438045250042/AnsiballZ_stat.py'
Dec 10 19:40:49 compute-0 sudo[150621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:49 compute-0 python3.9[150623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:49 compute-0 sudo[150621]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:49 compute-0 sudo[150744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezvhszrkkdsardijbknqcszhwjjhbmol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395649.0687182-775-24438045250042/AnsiballZ_copy.py'
Dec 10 19:40:49 compute-0 sudo[150744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:50 compute-0 python3.9[150746]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395649.0687182-775-24438045250042/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:50 compute-0 sudo[150744]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:50 compute-0 sudo[150896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juzfmwdqjgandtcjddkitlitfthpaevp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395650.1882832-775-112368560713233/AnsiballZ_stat.py'
Dec 10 19:40:50 compute-0 sudo[150896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:50 compute-0 python3.9[150898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:50 compute-0 sudo[150896]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:51 compute-0 sudo[151019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvdeeeticjzvrcyonvfgubemopshyvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395650.1882832-775-112368560713233/AnsiballZ_copy.py'
Dec 10 19:40:51 compute-0 sudo[151019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:51 compute-0 python3.9[151021]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395650.1882832-775-112368560713233/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:51 compute-0 sudo[151019]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:51 compute-0 sudo[151171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfemhwxvncfbpdgkzskmwrjkwcpvhwva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395651.3769274-775-165281503822371/AnsiballZ_stat.py'
Dec 10 19:40:51 compute-0 sudo[151171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:51 compute-0 python3.9[151173]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:51 compute-0 sudo[151171]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:52 compute-0 sudo[151294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrzonccpttserzjszdwpvjnxvfzqylfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395651.3769274-775-165281503822371/AnsiballZ_copy.py'
Dec 10 19:40:52 compute-0 sudo[151294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:52 compute-0 python3.9[151296]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395651.3769274-775-165281503822371/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:52 compute-0 sudo[151294]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:53 compute-0 sudo[151446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sduusfrckprfilknwxraxiicvmbabkjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395652.825506-775-156775570060824/AnsiballZ_stat.py'
Dec 10 19:40:53 compute-0 sudo[151446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:53 compute-0 python3.9[151448]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:53 compute-0 sudo[151446]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:53 compute-0 sudo[151569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxintioxjyqxeejyqfeqkecfsupnbqep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395652.825506-775-156775570060824/AnsiballZ_copy.py'
Dec 10 19:40:53 compute-0 sudo[151569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:53 compute-0 python3.9[151571]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395652.825506-775-156775570060824/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:53 compute-0 sudo[151569]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:54 compute-0 sudo[151721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkaetgynzsiqecwvgrpathctxhuyujpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395653.9418745-775-6072230683842/AnsiballZ_stat.py'
Dec 10 19:40:54 compute-0 sudo[151721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:54 compute-0 python3.9[151723]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:54 compute-0 sudo[151721]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:54 compute-0 podman[151818]: 2025-12-10 19:40:54.78282296 +0000 UTC m=+0.061416892 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
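Interleaved with the Ansible run, podman's periodic health check reports the ovn_metadata_agent container as healthy (health_failing_streak=0), using the healthcheck script mounted at /openstack/healthcheck. The same check can be triggered on demand with standard podman commands:

    # run the container's configured healthcheck once, then show its status
    podman healthcheck run ovn_metadata_agent
    podman ps --filter name=ovn_metadata_agent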
Dec 10 19:40:54 compute-0 sudo[151860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prerdycvwehhbmndgmoxlfiuuogpnrmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395653.9418745-775-6072230683842/AnsiballZ_copy.py'
Dec 10 19:40:54 compute-0 sudo[151860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:54 compute-0 python3.9[151865]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395653.9418745-775-6072230683842/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:55 compute-0 sudo[151860]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:55 compute-0 sudo[152015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmmoumavrqmmroswfkzdslvlplxaadfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395655.1483145-775-52430781263932/AnsiballZ_stat.py'
Dec 10 19:40:55 compute-0 sudo[152015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:55 compute-0 python3.9[152017]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:55 compute-0 sudo[152015]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:55 compute-0 sudo[152138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rssvaioyeffagfmndmocljzrqtnghprh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395655.1483145-775-52430781263932/AnsiballZ_copy.py'
Dec 10 19:40:56 compute-0 sudo[152138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:56 compute-0 python3.9[152140]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395655.1483145-775-52430781263932/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:56 compute-0 sudo[152138]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:56 compute-0 sudo[152290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjwbbcnzljynxmvaeovsvuzzvigpjjgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395656.3465645-775-85640287845123/AnsiballZ_stat.py'
Dec 10 19:40:56 compute-0 sudo[152290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:56 compute-0 python3.9[152292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:56 compute-0 sudo[152290]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:57 compute-0 sudo[152413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjtjhgawobbsdvimnsddlndllmsixlpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395656.3465645-775-85640287845123/AnsiballZ_copy.py'
Dec 10 19:40:57 compute-0 sudo[152413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:57 compute-0 python3.9[152415]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395656.3465645-775-85640287845123/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:57 compute-0 sudo[152413]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:58 compute-0 sudo[152565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynkxokjnrrjuobrulkpsfpnjrxyvjnwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395657.7143102-775-42655971811387/AnsiballZ_stat.py'
Dec 10 19:40:58 compute-0 sudo[152565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:58 compute-0 python3.9[152567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:58 compute-0 sudo[152565]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:58 compute-0 sudo[152688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrpymxjwbiwqwakrxhakescayfaahbca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395657.7143102-775-42655971811387/AnsiballZ_copy.py'
Dec 10 19:40:58 compute-0 sudo[152688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:58 compute-0 python3.9[152690]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395657.7143102-775-42655971811387/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:40:58 compute-0 sudo[152688]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:59 compute-0 sudo[152840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adylnpesnluhsybrnjhkebmovjxqjpeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395658.91333-775-123847784973946/AnsiballZ_stat.py'
Dec 10 19:40:59 compute-0 sudo[152840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:59 compute-0 python3.9[152842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:40:59 compute-0 sudo[152840]: pam_unix(sudo:session): session closed for user root
Dec 10 19:40:59 compute-0 sudo[152963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvmzilwmacnzuoikhagolzksqmdkhfqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395658.91333-775-123847784973946/AnsiballZ_copy.py'
Dec 10 19:40:59 compute-0 sudo[152963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:40:59 compute-0 python3.9[152965]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395658.91333-775-123847784973946/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:00 compute-0 sudo[152963]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:00 compute-0 python3.9[153115]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:41:01 compute-0 sudo[153268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhkprgekkkeoipygajuhtqbnwyffyzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395660.9483497-981-72275488782742/AnsiballZ_seboolean.py'
Dec 10 19:41:01 compute-0 sudo[153268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:01 compute-0 python3.9[153270]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 10 19:41:02 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 10 19:41:03 compute-0 sudo[153268]: pam_unix(sudo:session): session closed for user root
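The ansible.posix.seboolean task persistently enables the os_enable_vtpm SELinux boolean; the dbus-broker "op=load_policy" line above matches the policy reload such a persistent change causes. A manual shell equivalent:

    # persistent equivalent of the seboolean task above; getsebool confirms the new value
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm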
Dec 10 19:41:03 compute-0 podman[153275]: 2025-12-10 19:41:03.125658857 +0000 UTC m=+0.104889386 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 10 19:41:03 compute-0 sudo[153450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bprcnkrqnplxtojxijlzqghweaqftgru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395663.2369974-989-136599632401327/AnsiballZ_copy.py'
Dec 10 19:41:03 compute-0 sudo[153450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:03 compute-0 python3.9[153452]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:03 compute-0 sudo[153450]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:04 compute-0 sudo[153602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfrqfoulsrozodzgkhhjemqmgigzlaji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395663.9362893-989-99665249208364/AnsiballZ_copy.py'
Dec 10 19:41:04 compute-0 sudo[153602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:04 compute-0 python3.9[153604]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:04 compute-0 sudo[153602]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:05 compute-0 sudo[153754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzagherkvhtbsabchbtdndbzkhcllkmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395664.641084-989-185382545968721/AnsiballZ_copy.py'
Dec 10 19:41:05 compute-0 sudo[153754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:06 compute-0 python3.9[153756]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:06 compute-0 sudo[153754]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:06 compute-0 sudo[153906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqiuddmzleqejvdcptfnbrwxjpekrxym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395666.2277598-989-133695052068693/AnsiballZ_copy.py'
Dec 10 19:41:06 compute-0 sudo[153906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:06 compute-0 python3.9[153908]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:06 compute-0 sudo[153906]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:07 compute-0 sudo[154058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxchsfudsubzcpabhpholvmyjbrkwqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395666.954266-989-86758446874763/AnsiballZ_copy.py'
Dec 10 19:41:07 compute-0 sudo[154058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:07 compute-0 python3.9[154060]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:07 compute-0 sudo[154058]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:07 compute-0 sudo[154210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iecjcnnpwhreudabzbofpkpdmzysmdig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395667.6159055-1025-241916064604494/AnsiballZ_copy.py'
Dec 10 19:41:07 compute-0 sudo[154210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:08 compute-0 python3.9[154212]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:08 compute-0 sudo[154210]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:08 compute-0 sudo[154362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abhafnbspiadsdhazubyiqcnuobdofoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395668.1958823-1025-111388672055159/AnsiballZ_copy.py'
Dec 10 19:41:08 compute-0 sudo[154362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:08 compute-0 python3.9[154364]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:08 compute-0 sudo[154362]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:09 compute-0 sudo[154514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gocpnyaurkurkxdywolyhvyoondpvghp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395668.775922-1025-154978027975604/AnsiballZ_copy.py'
Dec 10 19:41:09 compute-0 sudo[154514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:09 compute-0 python3.9[154516]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:09 compute-0 sudo[154514]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:09 compute-0 sudo[154666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzazhhrdvqsnvstrzfzdpeihclxcrauw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395669.3975334-1025-173770986474489/AnsiballZ_copy.py'
Dec 10 19:41:09 compute-0 sudo[154666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:09 compute-0 python3.9[154668]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:09 compute-0 sudo[154666]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:10 compute-0 sudo[154818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peyiujtpdyszgyefgfbddxlwsuerwzcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395670.1173627-1025-51554697322612/AnsiballZ_copy.py'
Dec 10 19:41:10 compute-0 sudo[154818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:10 compute-0 python3.9[154820]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:10 compute-0 sudo[154818]: pam_unix(sudo:session): session closed for user root
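The copy tasks above install the same TLS material from /var/lib/openstack/certs/libvirt/default into the libvirt locations (/etc/pki/libvirt, /etc/pki/CA) and, with group qemu and mode 0640, into /etc/pki/qemu; note that clientkey.pem is logged with mode 0644 while serverkey.pem gets 0600. A quick consistency check, assuming openssl and bash are available on the host:

    # verify the installed server certificate chains to the installed CA
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem
    # confirm certificate and key belong together by comparing their public keys
    diff <(openssl x509 -in /etc/pki/libvirt/servercert.pem -pubkey -noout) \
         <(openssl pkey -in /etc/pki/libvirt/private/serverkey.pem -pubout)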
Dec 10 19:41:11 compute-0 sudo[154970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjjtzbwzoenpoetfyiifjhwvxciplala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395670.7471075-1061-134127852075055/AnsiballZ_systemd.py'
Dec 10 19:41:11 compute-0 sudo[154970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:11 compute-0 python3.9[154972]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:41:11 compute-0 systemd[1]: Reloading.
Dec 10 19:41:11 compute-0 systemd-sysv-generator[155004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:11 compute-0 systemd-rc-local-generator[155000]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:11 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 10 19:41:11 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 10 19:41:11 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 10 19:41:11 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 10 19:41:11 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 10 19:41:11 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 10 19:41:11 compute-0 sudo[154970]: pam_unix(sudo:session): session closed for user root
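Each ansible.builtin.systemd task performs a daemon-reload and restarts one modular libvirt daemon; systemd then pulls in the matching socket units, as the "Listening on ..." lines show. A manual equivalent of the virtlogd step above:

    # manual equivalent of the systemd task above
    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl --no-pager status virtlogd.socket virtlogd-admin.socket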
Dec 10 19:41:12 compute-0 sudo[155164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsrmeakwiibduubcbbgooopydeenxjfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395671.8295991-1061-181144945558797/AnsiballZ_systemd.py'
Dec 10 19:41:12 compute-0 sudo[155164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:12 compute-0 python3.9[155166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:41:12 compute-0 systemd[1]: Reloading.
Dec 10 19:41:12 compute-0 systemd-rc-local-generator[155191]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:12 compute-0 systemd-sysv-generator[155196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:12 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 10 19:41:12 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 10 19:41:12 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 10 19:41:12 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 10 19:41:12 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 10 19:41:12 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 10 19:41:12 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 10 19:41:12 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 10 19:41:12 compute-0 sudo[155164]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:13 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 10 19:41:13 compute-0 sudo[155381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjcoddysrobfjxcdglufmytunncjtqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395672.856218-1061-61572295532783/AnsiballZ_systemd.py'
Dec 10 19:41:13 compute-0 sudo[155381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:13 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 10 19:41:13 compute-0 python3.9[155383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:41:13 compute-0 systemd[1]: Reloading.
Dec 10 19:41:13 compute-0 systemd-rc-local-generator[155414]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:13 compute-0 systemd-sysv-generator[155419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:13 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 10 19:41:13 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 10 19:41:13 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 10 19:41:13 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 10 19:41:13 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 10 19:41:13 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 10 19:41:13 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 10 19:41:13 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 10 19:41:13 compute-0 sudo[155381]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:14 compute-0 sudo[155600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nisamlgfyaiejzdvuenolknsnyeyhqra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395674.0328743-1061-133822909165/AnsiballZ_systemd.py'
Dec 10 19:41:14 compute-0 sudo[155600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:14 compute-0 setroubleshoot[155330]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1ac1d67b-4a27-4945-91ed-34560633aac5
Dec 10 19:41:14 compute-0 setroubleshoot[155330]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 10 19:41:14 compute-0 setroubleshoot[155330]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1ac1d67b-4a27-4945-91ed-34560633aac5
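The denial reported above is virtlogd attempting to use the dac_read_search capability. The commands suggested in the setroubleshoot message, collected into one root shell sequence (my-virtlogd is simply the module name passed to audit2allow):

    # workflow suggested by setroubleshoot above, run as root
    auditctl -w /etc/shadow -p w                      # turn on full auditing so AVCs carry PATH records
    ausearch -m avc -ts recent                        # inspect recent denials after reproducing the access
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp                 # install the generated local policy module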
Dec 10 19:41:14 compute-0 python3.9[155602]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:41:14 compute-0 systemd[1]: Reloading.
Dec 10 19:41:14 compute-0 systemd-sysv-generator[155633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:14 compute-0 systemd-rc-local-generator[155630]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:14 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 10 19:41:14 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 10 19:41:14 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 10 19:41:15 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 10 19:41:15 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 10 19:41:15 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 10 19:41:15 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 10 19:41:15 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 10 19:41:15 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 10 19:41:15 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 10 19:41:15 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 10 19:41:15 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 10 19:41:15 compute-0 sudo[155600]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:15 compute-0 sudo[155815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpsqszhwzwjnfeintvazhlaxwwxvutmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395675.240965-1061-207150823091600/AnsiballZ_systemd.py'
Dec 10 19:41:15 compute-0 sudo[155815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:15 compute-0 python3.9[155817]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:41:15 compute-0 systemd[1]: Reloading.
Dec 10 19:41:16 compute-0 systemd-rc-local-generator[155845]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:16 compute-0 systemd-sysv-generator[155849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:16 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 10 19:41:16 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 10 19:41:16 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 10 19:41:16 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 10 19:41:16 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 10 19:41:16 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 10 19:41:16 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 10 19:41:16 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 10 19:41:16 compute-0 sudo[155815]: pam_unix(sudo:session): session closed for user root
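At this point all five modular libvirt daemons (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) have been restarted with their activation sockets listening. A quick way to confirm that state from a shell:

    # confirm the libvirt daemons and their activation sockets are up
    systemctl is-active virtlogd virtnodedevd virtproxyd virtqemud virtsecretd
    systemctl --no-pager list-sockets | grep -E 'virt(logd|nodedevd|proxyd|qemud|secretd)'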
Dec 10 19:41:16 compute-0 sudo[156027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhgsvceaipdkztnpsojooeouwcvvjupe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395676.616145-1098-5416360000645/AnsiballZ_file.py'
Dec 10 19:41:16 compute-0 sudo[156027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:17 compute-0 python3.9[156029]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:17 compute-0 sudo[156027]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:17 compute-0 sudo[156179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qijnmcjsvysgivkzyqcaioeqqandxhxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395677.3889823-1106-216007078336923/AnsiballZ_find.py'
Dec 10 19:41:17 compute-0 sudo[156179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:17 compute-0 python3.9[156181]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:41:17 compute-0 sudo[156179]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:18 compute-0 sudo[156331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbgnbfxyrfjwdufgjkvxqzqmyadfufid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395678.2655175-1120-28744553537964/AnsiballZ_stat.py'
Dec 10 19:41:18 compute-0 sudo[156331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:18 compute-0 python3.9[156333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:18 compute-0 sudo[156331]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:19 compute-0 sudo[156454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyrhtvfexfcgklljleawmpmvisjayyuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395678.2655175-1120-28744553537964/AnsiballZ_copy.py'
Dec 10 19:41:19 compute-0 sudo[156454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:19 compute-0 python3.9[156456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395678.2655175-1120-28744553537964/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:19 compute-0 sudo[156454]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:20 compute-0 sudo[156606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kikyihdixaxtvzvxqmyoxurtxfghdrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395679.7565198-1136-213148392275351/AnsiballZ_file.py'
Dec 10 19:41:20 compute-0 sudo[156606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:20 compute-0 python3.9[156608]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:20 compute-0 sudo[156606]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:20 compute-0 sudo[156758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqnetefqwzcfwkqmbpjrltasefqnyodq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395680.4658244-1144-276729465807865/AnsiballZ_stat.py'
Dec 10 19:41:20 compute-0 sudo[156758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:20 compute-0 python3.9[156760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:21 compute-0 sudo[156758]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:21 compute-0 sudo[156836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpiqaontsasjbaueikunqxzqacifiyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395680.4658244-1144-276729465807865/AnsiballZ_file.py'
Dec 10 19:41:21 compute-0 sudo[156836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:21 compute-0 python3.9[156838]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:21 compute-0 sudo[156836]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:21 compute-0 sudo[156988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynlmwsujcqlkbadbfhlbephnqwfnphs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395681.6252992-1156-161749622049701/AnsiballZ_stat.py'
Dec 10 19:41:21 compute-0 sudo[156988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:22 compute-0 python3.9[156990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:22 compute-0 sudo[156988]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:22 compute-0 sudo[157066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpgxehbmzmyaczhwmzvlnphlhzbjnynn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395681.6252992-1156-161749622049701/AnsiballZ_file.py'
Dec 10 19:41:22 compute-0 sudo[157066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:22 compute-0 python3.9[157068]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w9pcq1jx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:22 compute-0 sudo[157066]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:23 compute-0 sudo[157218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxyqbdtoinrrirzetzrdbizbpxbtwswx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395682.9353569-1168-109023169385516/AnsiballZ_stat.py'
Dec 10 19:41:23 compute-0 sudo[157218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:41:23.349 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:41:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:41:23.351 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:41:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:41:23.351 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:41:23 compute-0 python3.9[157220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:23 compute-0 sudo[157218]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:23 compute-0 sudo[157296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqeubsqrmudycrmskjvvbjxwubplxlmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395682.9353569-1168-109023169385516/AnsiballZ_file.py'
Dec 10 19:41:23 compute-0 sudo[157296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:23 compute-0 python3.9[157298]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:23 compute-0 sudo[157296]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:24 compute-0 sudo[157448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnbelxunikcwexdanuccsrmrcgwrlfsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395684.2882712-1181-148980397490321/AnsiballZ_command.py'
Dec 10 19:41:24 compute-0 sudo[157448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:24 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 10 19:41:24 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 10 19:41:24 compute-0 python3.9[157450]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:41:24 compute-0 sudo[157448]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:25 compute-0 podman[157459]: 2025-12-10 19:41:25.141963835 +0000 UTC m=+0.105533685 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:41:25 compute-0 sudo[157621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukdwwdavxcxxwqpfhdeezlfhmmzsdowu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395685.1788292-1189-281037112090306/AnsiballZ_edpm_nftables_from_files.py'
Dec 10 19:41:25 compute-0 sudo[157621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:25 compute-0 python3[157623]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 10 19:41:25 compute-0 sudo[157621]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:26 compute-0 sudo[157773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fviptcifhremqffphiiblxtjmywdngdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395686.2052677-1197-115747091361613/AnsiballZ_stat.py'
Dec 10 19:41:26 compute-0 sudo[157773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:26 compute-0 python3.9[157775]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:26 compute-0 sudo[157773]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:27 compute-0 sudo[157851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfgpvwxrriboriwdhhnikrvrleppgae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395686.2052677-1197-115747091361613/AnsiballZ_file.py'
Dec 10 19:41:27 compute-0 sudo[157851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:27 compute-0 python3.9[157853]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:27 compute-0 sudo[157851]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:27 compute-0 sudo[158003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-staqtepmnbyqmctvzfgpffcgzyfydyti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395687.5583353-1209-33852944700952/AnsiballZ_stat.py'
Dec 10 19:41:27 compute-0 sudo[158003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:28 compute-0 python3.9[158005]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:28 compute-0 sudo[158003]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:28 compute-0 sudo[158081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jouqekaltrkzvddinwtlzznsywafcwnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395687.5583353-1209-33852944700952/AnsiballZ_file.py'
Dec 10 19:41:28 compute-0 sudo[158081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:28 compute-0 python3.9[158083]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:28 compute-0 sudo[158081]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:29 compute-0 sudo[158233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyygljytyfelyidbuxqldxaectfcpiss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395688.9535785-1221-179825834639199/AnsiballZ_stat.py'
Dec 10 19:41:29 compute-0 sudo[158233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:29 compute-0 python3.9[158235]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:29 compute-0 sudo[158233]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:29 compute-0 sudo[158311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vifydqhcrdrtbinfrosnmgtkwyyiqgrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395688.9535785-1221-179825834639199/AnsiballZ_file.py'
Dec 10 19:41:29 compute-0 sudo[158311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:29 compute-0 python3.9[158313]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:29 compute-0 sudo[158311]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:30 compute-0 sudo[158463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igtsqffiwxehsrmjjxeiiemyaqnwbkpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395690.098984-1233-165001355597694/AnsiballZ_stat.py'
Dec 10 19:41:30 compute-0 sudo[158463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:30 compute-0 python3.9[158465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:30 compute-0 sudo[158463]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:30 compute-0 sudo[158541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwdrazoyoyilkakejjtaqzlkitjmctpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395690.098984-1233-165001355597694/AnsiballZ_file.py'
Dec 10 19:41:30 compute-0 sudo[158541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:31 compute-0 python3.9[158543]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:31 compute-0 sudo[158541]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:31 compute-0 sudo[158693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdtwumkaeqncgsmkuzjslfdvhopdamjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395691.2703755-1245-32818357813185/AnsiballZ_stat.py'
Dec 10 19:41:31 compute-0 sudo[158693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:31 compute-0 python3.9[158695]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:31 compute-0 sudo[158693]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:32 compute-0 sudo[158818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxnezfxynncgsholsoyspfuvegpbtfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395691.2703755-1245-32818357813185/AnsiballZ_copy.py'
Dec 10 19:41:32 compute-0 sudo[158818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:32 compute-0 python3.9[158820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765395691.2703755-1245-32818357813185/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:32 compute-0 sudo[158818]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:32 compute-0 sudo[158970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gezttjuwjctseqveryirsudfornycwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395692.6895545-1260-81430978811810/AnsiballZ_file.py'
Dec 10 19:41:32 compute-0 sudo[158970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:33 compute-0 python3.9[158972]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:33 compute-0 sudo[158970]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:33 compute-0 sudo[159133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svrzdxmljdmzqaggtdlibqrqrerxmdii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395693.483371-1268-61956487065816/AnsiballZ_command.py'
Dec 10 19:41:33 compute-0 sudo[159133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:33 compute-0 podman[159096]: 2025-12-10 19:41:33.942996369 +0000 UTC m=+0.152806302 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec 10 19:41:34 compute-0 python3.9[159141]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:41:34 compute-0 sudo[159133]: pam_unix(sudo:session): session closed for user root
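The check-only run above concatenates the generated nftables fragments and validates them without loading anything; a sketch of that pipeline, using the same file order shown in the logged command:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: parse and validate only, do not apply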
Dec 10 19:41:34 compute-0 sudo[159302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sybawrykzcetjzfghzngydezeisefhul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395694.2779458-1276-53615718284594/AnsiballZ_blockinfile.py'
Dec 10 19:41:34 compute-0 sudo[159302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:34 compute-0 python3.9[159304]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:34 compute-0 sudo[159302]: pam_unix(sudo:session): session closed for user root
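The blockinfile task above maintains a marked include block in /etc/sysconfig/nftables.conf and validates the result with "nft -c -f %s" before writing; given the markers and lines passed to the module, the managed block it keeps in place looks like:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK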
Dec 10 19:41:35 compute-0 sudo[159454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofcaozmqmwizmonzkxbdcfavvezuhxwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395695.1791222-1285-29413506988643/AnsiballZ_command.py'
Dec 10 19:41:35 compute-0 sudo[159454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:35 compute-0 python3.9[159456]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:41:35 compute-0 sudo[159454]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:36 compute-0 sudo[159607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmochxkrsnwlocgkeunsshyuzmwthqcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395695.8573194-1293-162004546496887/AnsiballZ_stat.py'
Dec 10 19:41:36 compute-0 sudo[159607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:36 compute-0 python3.9[159609]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:41:36 compute-0 sudo[159607]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:36 compute-0 sudo[159761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npsuecmhpcdbdvhxdavxepkulknnpeky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395696.5119941-1301-175855297662035/AnsiballZ_command.py'
Dec 10 19:41:36 compute-0 sudo[159761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:37 compute-0 python3.9[159763]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:41:37 compute-0 sudo[159761]: pam_unix(sudo:session): session closed for user root
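Because the edpm-rules.nft.changed marker exists (touched earlier and checked by the stat task above), the ruleset is actually loaded: the base chains file is applied first, then flushes, rules and updated jumps are fed to nft together. A shell sketch of that apply sequence, using the same files as the logged commands:

    nft -f /etc/nftables/edpm-chains.nft                 # ensure the chains exist before flushing
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -   # flush old rules, load the new set, refresh jumps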
Dec 10 19:41:37 compute-0 sudo[159916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgaqltoipevlfcwxwxymebhgpolzwxyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395697.2616916-1309-10583824634003/AnsiballZ_file.py'
Dec 10 19:41:37 compute-0 sudo[159916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:37 compute-0 python3.9[159918]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:37 compute-0 sudo[159916]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:38 compute-0 sudo[160068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjscwohsinuufpazqvvbkqivyzksgqhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395697.96979-1317-53452073410920/AnsiballZ_stat.py'
Dec 10 19:41:38 compute-0 sudo[160068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:38 compute-0 python3.9[160070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:38 compute-0 sudo[160068]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:38 compute-0 sudo[160191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsnfqgiztblipqvoqwvapedwcsfouxdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395697.96979-1317-53452073410920/AnsiballZ_copy.py'
Dec 10 19:41:38 compute-0 sudo[160191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:39 compute-0 python3.9[160193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395697.96979-1317-53452073410920/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:39 compute-0 sudo[160191]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:39 compute-0 sudo[160343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icsfzjcplzwuiwejdvkqhuecpafaiuwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395699.358694-1332-217975846273572/AnsiballZ_stat.py'
Dec 10 19:41:39 compute-0 sudo[160343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:39 compute-0 python3.9[160345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:39 compute-0 sudo[160343]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:40 compute-0 sudo[160466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vossjyjzsnowuepevguwbfnemlaeyvtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395699.358694-1332-217975846273572/AnsiballZ_copy.py'
Dec 10 19:41:40 compute-0 sudo[160466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:40 compute-0 python3.9[160468]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395699.358694-1332-217975846273572/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:40 compute-0 sudo[160466]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:41 compute-0 sudo[160618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boljpneaysuglwwjvmgjhlewbbcmojbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395700.7082944-1347-244800528497726/AnsiballZ_stat.py'
Dec 10 19:41:41 compute-0 sudo[160618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:41 compute-0 python3.9[160620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:41:41 compute-0 sudo[160618]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:41 compute-0 sudo[160741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwihmtebphxvgkkhadfpepmmqqifsxtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395700.7082944-1347-244800528497726/AnsiballZ_copy.py'
Dec 10 19:41:41 compute-0 sudo[160741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:41 compute-0 python3.9[160743]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395700.7082944-1347-244800528497726/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:41:41 compute-0 sudo[160741]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:42 compute-0 sudo[160893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvdcioncwhmbbzbrkpxxsvqdimhrmlvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395701.9307458-1362-277826770136543/AnsiballZ_systemd.py'
Dec 10 19:41:42 compute-0 sudo[160893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:42 compute-0 python3.9[160895]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:41:42 compute-0 systemd[1]: Reloading.
Dec 10 19:41:42 compute-0 systemd-sysv-generator[160924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:42 compute-0 systemd-rc-local-generator[160920]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:42 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 10 19:41:42 compute-0 sudo[160893]: pam_unix(sudo:session): session closed for user root
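edpm_libvirt.target is enabled and "restarted" here (for a target unit this simply re-reaches it, as the journal line above confirms); a rough shell equivalent of the module call, assuming the same unit name:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target   # journal: "Reached target edpm_libvirt.target."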
Dec 10 19:41:43 compute-0 sudo[161084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueysqqdrlfbjaxvamamghzwzutvbyshe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395703.1445553-1370-189486439159492/AnsiballZ_systemd.py'
Dec 10 19:41:43 compute-0 sudo[161084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:43 compute-0 python3.9[161086]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 10 19:41:43 compute-0 systemd[1]: Reloading.
Dec 10 19:41:43 compute-0 systemd-rc-local-generator[161112]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:43 compute-0 systemd-sysv-generator[161115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:44 compute-0 systemd[1]: Reloading.
Dec 10 19:41:44 compute-0 systemd-sysv-generator[161150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:41:44 compute-0 systemd-rc-local-generator[161146]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:41:44 compute-0 sudo[161084]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:44 compute-0 sshd-session[106684]: Connection closed by 192.168.122.30 port 33628
Dec 10 19:41:44 compute-0 sshd-session[106681]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:41:44 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 10 19:41:44 compute-0 systemd[1]: session-23.scope: Consumed 3min 33.786s CPU time.
Dec 10 19:41:44 compute-0 systemd-logind[789]: Session 23 logged out. Waiting for processes to exit.
Dec 10 19:41:44 compute-0 systemd-logind[789]: Removed session 23.
Dec 10 19:41:50 compute-0 sshd-session[161185]: Accepted publickey for zuul from 192.168.122.30 port 43246 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:41:50 compute-0 systemd-logind[789]: New session 24 of user zuul.
Dec 10 19:41:50 compute-0 systemd[1]: Started Session 24 of User zuul.
Dec 10 19:41:50 compute-0 sshd-session[161185]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:41:51 compute-0 python3.9[161338]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:41:52 compute-0 python3.9[161492]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:41:52 compute-0 network[161509]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:41:52 compute-0 network[161510]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:41:52 compute-0 network[161511]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:41:55 compute-0 podman[161585]: 2025-12-10 19:41:55.280524226 +0000 UTC m=+0.071111306 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 10 19:41:57 compute-0 sudo[161798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biycsauteovifjjjlwyxbjrochczgpqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395717.4686437-47-30215645357682/AnsiballZ_setup.py'
Dec 10 19:41:57 compute-0 sudo[161798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:58 compute-0 python3.9[161800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:41:58 compute-0 sudo[161798]: pam_unix(sudo:session): session closed for user root
Dec 10 19:41:58 compute-0 sudo[161882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrtlehtekszwfbufqnafgigdprlzpob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395717.4686437-47-30215645357682/AnsiballZ_dnf.py'
Dec 10 19:41:58 compute-0 sudo[161882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:41:59 compute-0 python3.9[161884]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:42:04 compute-0 podman[161886]: 2025-12-10 19:42:04.169638098 +0000 UTC m=+0.146092729 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 10 19:42:04 compute-0 sudo[161882]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:05 compute-0 sudo[162063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvejunbntnhxfloudkyjmptfsaisdafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395724.6394918-59-22378422293597/AnsiballZ_stat.py'
Dec 10 19:42:05 compute-0 sudo[162063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:05 compute-0 python3.9[162065]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:42:05 compute-0 sudo[162063]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:06 compute-0 sudo[162215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbgyutesrhdhxyamdssxjutxynhfprhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395725.4541378-69-21766849820620/AnsiballZ_command.py'
Dec 10 19:42:06 compute-0 sudo[162215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:06 compute-0 python3.9[162217]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:42:06 compute-0 sudo[162215]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:07 compute-0 sudo[162368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlfcykpftvjvwrikieetybafibhkvoha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395726.8843455-79-25960729644316/AnsiballZ_stat.py'
Dec 10 19:42:07 compute-0 sudo[162368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:07 compute-0 python3.9[162370]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:42:07 compute-0 sudo[162368]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:07 compute-0 sudo[162520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzcmjtcpvxzhnhihjbhiepeqdzanqegz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395727.5260508-87-35157454851433/AnsiballZ_command.py'
Dec 10 19:42:07 compute-0 sudo[162520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:07 compute-0 python3.9[162522]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:42:07 compute-0 sudo[162520]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:08 compute-0 sudo[162673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqrgxcojhhwlxajzagbfnzugwuzmbkgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395728.134488-95-226829711242205/AnsiballZ_stat.py'
Dec 10 19:42:08 compute-0 sudo[162673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:08 compute-0 python3.9[162675]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:08 compute-0 sudo[162673]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:09 compute-0 sudo[162796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bamyxczwtydoscimscsdhgpcsgmthqln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395728.134488-95-226829711242205/AnsiballZ_copy.py'
Dec 10 19:42:09 compute-0 sudo[162796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:09 compute-0 python3.9[162798]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395728.134488-95-226829711242205/.source.iscsi _original_basename=.3sd1mjci follow=False checksum=c60ca9a1e0d59036d8942a0048d5244b65d8d132 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:09 compute-0 sudo[162796]: pam_unix(sudo:session): session closed for user root
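Taken together, the preceding tasks generate a fresh initiator IQN with iscsi-iname and install it; a hedged sketch of the same effect, assuming the standard InitiatorName= format of that file:

    IQN="$(/usr/sbin/iscsi-iname)"                                    # e.g. iqn.1994-05.com.redhat:<random>
    printf 'InitiatorName=%s\n' "$IQN" > /etc/iscsi/initiatorname.iscsi
    chmod 0644 /etc/iscsi/initiatorname.iscsi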
Dec 10 19:42:09 compute-0 sudo[162948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqwwvjkwrvzagklgqaikcztwwenvowvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395729.5235412-110-246219276117535/AnsiballZ_file.py'
Dec 10 19:42:09 compute-0 sudo[162948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:10 compute-0 python3.9[162950]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:10 compute-0 sudo[162948]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:10 compute-0 sudo[163100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klrqjpljscjpblbjosbfnpsqawjrbtnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395730.3425763-118-179854338867824/AnsiballZ_lineinfile.py'
Dec 10 19:42:10 compute-0 sudo[163100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:10 compute-0 python3.9[163102]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:11 compute-0 sudo[163100]: pam_unix(sudo:session): session closed for user root
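The lineinfile task above pins the CHAP digest list in /etc/iscsi/iscsid.conf, inserting after the commented default when the setting is absent; the line it ensures is exactly:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5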
Dec 10 19:42:11 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:42:11 compute-0 sudo[163253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czlmkayhhscpmcnjbnokikeraantdpgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395731.2173567-127-71176793080950/AnsiballZ_systemd_service.py'
Dec 10 19:42:11 compute-0 sudo[163253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:12 compute-0 python3.9[163255]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:42:12 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 10 19:42:12 compute-0 sudo[163253]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:12 compute-0 sudo[163409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkrwlkxzszkfcxfcvpnohgexyzbgoecz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395732.3934648-135-53738060632606/AnsiballZ_systemd_service.py'
Dec 10 19:42:12 compute-0 sudo[163409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:12 compute-0 python3.9[163411]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:42:13 compute-0 systemd[1]: Reloading.
Dec 10 19:42:13 compute-0 systemd-rc-local-generator[163437]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:42:13 compute-0 systemd-sysv-generator[163445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:42:13 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 10 19:42:13 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 10 19:42:13 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 10 19:42:13 compute-0 systemd[1]: Started Open-iSCSI.
Dec 10 19:42:13 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 10 19:42:13 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 10 19:42:13 compute-0 sudo[163409]: pam_unix(sudo:session): session closed for user root
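The two systemd_service tasks enable and start the iscsid socket and daemon; a minimal shell equivalent, assuming the same unit names:

    systemctl enable --now iscsid.socket
    systemctl enable --now iscsid.service   # journal above shows Open-iSCSI and its shutdown-logout unit starting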
Dec 10 19:42:14 compute-0 sudo[163612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eggjejkmxtlkcudshsiybkxeyadplxye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395733.8012662-146-18463520867702/AnsiballZ_service_facts.py'
Dec 10 19:42:14 compute-0 sudo[163612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:14 compute-0 python3.9[163614]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:42:14 compute-0 network[163631]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:42:14 compute-0 network[163632]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:42:14 compute-0 network[163633]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:42:19 compute-0 sudo[163612]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:20 compute-0 sudo[163902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intfgchwzyzsetukyvbpjcurxlquzoun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395740.1852689-156-182319808812599/AnsiballZ_file.py'
Dec 10 19:42:20 compute-0 sudo[163902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:20 compute-0 python3.9[163904]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 10 19:42:20 compute-0 sudo[163902]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:21 compute-0 sudo[164054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygqzrjpsvnqcothyakvmywsjfelpvnyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395741.0207918-164-1830442958375/AnsiballZ_modprobe.py'
Dec 10 19:42:21 compute-0 sudo[164054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:21 compute-0 python3.9[164056]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 10 19:42:21 compute-0 sudo[164054]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:22 compute-0 sudo[164210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfykkidxxirbwqhgbwthqnammukolhdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395741.9722207-172-120645262546580/AnsiballZ_stat.py'
Dec 10 19:42:22 compute-0 sudo[164210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:22 compute-0 python3.9[164212]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:22 compute-0 sudo[164210]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:22 compute-0 sudo[164333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkgbdepdhxoeaituckxjzossndnsstoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395741.9722207-172-120645262546580/AnsiballZ_copy.py'
Dec 10 19:42:22 compute-0 sudo[164333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:23 compute-0 python3.9[164335]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395741.9722207-172-120645262546580/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:23 compute-0 sudo[164333]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:42:23.350 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:42:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:42:23.351 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:42:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:42:23.352 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:42:23 compute-0 sudo[164485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vifclmiyizxkopzpgrfgpvhbbbcjsybn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395743.2839358-188-48768630256450/AnsiballZ_lineinfile.py'
Dec 10 19:42:23 compute-0 sudo[164485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:23 compute-0 python3.9[164487]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:23 compute-0 sudo[164485]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:24 compute-0 sudo[164637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqygeddkhudszpapgvcbgribixxoniwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395743.968929-196-70754681435155/AnsiballZ_systemd.py'
Dec 10 19:42:24 compute-0 sudo[164637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:24 compute-0 python3.9[164639]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:42:24 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 10 19:42:24 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 10 19:42:24 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 10 19:42:24 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 10 19:42:24 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 10 19:42:25 compute-0 sudo[164637]: pam_unix(sudo:session): session closed for user root
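The tasks above ensure the dm-multipath kernel module is loaded now and after every reboot: modprobe loads it immediately, the copied /etc/modules-load.d/dm-multipath.conf plus the dm-multipath line in /etc/modules persist it, and the systemd-modules-load.service restart re-applies the drop-ins. The template content itself is not logged; assuming it is simply the module name, a manual equivalent is:

    # load dm-multipath right away
    modprobe dm-multipath
    # persist it via a modules-load.d drop-in (assumed single-line content)
    printf 'dm-multipath\n' > /etc/modules-load.d/dm-multipath.conf
    # let systemd re-read the drop-ins and load anything still missing
    systemctl restart systemd-modules-load.service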
Dec 10 19:42:25 compute-0 sudo[164809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvrssjqwzokysctogusdavnnovzswraa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395745.2173748-204-215413336445136/AnsiballZ_file.py'
Dec 10 19:42:25 compute-0 podman[164767]: 2025-12-10 19:42:25.492605202 +0000 UTC m=+0.055677435 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 19:42:25 compute-0 sudo[164809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:25 compute-0 python3.9[164813]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:42:25 compute-0 sudo[164809]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:26 compute-0 sudo[164963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vunghlbzsmzdwggmzkkmfghtskigqoxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395745.8893201-213-151943599096124/AnsiballZ_stat.py'
Dec 10 19:42:26 compute-0 sudo[164963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:26 compute-0 python3.9[164965]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:42:26 compute-0 sudo[164963]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:26 compute-0 sudo[165115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etiiiicdovitumxlcavjnamibkovcnma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395746.4671772-222-236914395360243/AnsiballZ_stat.py'
Dec 10 19:42:26 compute-0 sudo[165115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:26 compute-0 python3.9[165117]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:42:26 compute-0 sudo[165115]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:27 compute-0 sudo[165267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdwgbdqrtjkpyiuteoffjfvhihfhruss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395747.1530194-230-13741849769186/AnsiballZ_stat.py'
Dec 10 19:42:27 compute-0 sudo[165267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:27 compute-0 python3.9[165269]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:27 compute-0 sudo[165267]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:27 compute-0 sudo[165390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajlcbqwpcabqdnunrycdpfxfdbtihlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395747.1530194-230-13741849769186/AnsiballZ_copy.py'
Dec 10 19:42:27 compute-0 sudo[165390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:28 compute-0 python3.9[165392]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395747.1530194-230-13741849769186/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:28 compute-0 sudo[165390]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:28 compute-0 sudo[165542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krgjlpdmsamwzoisbxgsufxgzvpxrtlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395748.345971-245-71602826282014/AnsiballZ_command.py'
Dec 10 19:42:28 compute-0 sudo[165542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:28 compute-0 python3.9[165544]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:42:28 compute-0 sudo[165542]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:29 compute-0 sudo[165695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtnpqyvmzovjfcfzthecyiibusyulnbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395748.9790099-253-101449627021211/AnsiballZ_lineinfile.py'
Dec 10 19:42:29 compute-0 sudo[165695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:29 compute-0 python3.9[165697]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:29 compute-0 sudo[165695]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:30 compute-0 sudo[165847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfdnrlxxthkoxywhacjyjzuxhrlssvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395749.6077583-261-205465559954716/AnsiballZ_replace.py'
Dec 10 19:42:30 compute-0 sudo[165847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:30 compute-0 python3.9[165849]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:30 compute-0 sudo[165847]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:30 compute-0 sudo[165999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipnowblcokbqbisfzbpqtqpkfbcgdwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395750.4174085-269-76166091651188/AnsiballZ_replace.py'
Dec 10 19:42:30 compute-0 sudo[165999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:30 compute-0 python3.9[166001]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:30 compute-0 sudo[165999]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:31 compute-0 sudo[166151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cctlkxjrdearrulupmaztrxoswpceunh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395751.2205613-278-210326325405821/AnsiballZ_lineinfile.py'
Dec 10 19:42:31 compute-0 sudo[166151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:31 compute-0 python3.9[166153]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:31 compute-0 sudo[166151]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:32 compute-0 sudo[166303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aetmijetbwkujkdnfxzhmsihssqzjhqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395751.8912144-278-41414425651419/AnsiballZ_lineinfile.py'
Dec 10 19:42:32 compute-0 sudo[166303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:32 compute-0 python3.9[166305]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:32 compute-0 sudo[166303]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:32 compute-0 sudo[166455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpuuszfymcnsokboiopmqialfmnudrpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395752.5499-278-177919466203136/AnsiballZ_lineinfile.py'
Dec 10 19:42:32 compute-0 sudo[166455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:33 compute-0 python3.9[166457]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:33 compute-0 sudo[166455]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:33 compute-0 sudo[166607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdjjzvbyptjznejqvzjzbemsqgnhedxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395753.1673064-278-196888697026017/AnsiballZ_lineinfile.py'
Dec 10 19:42:33 compute-0 sudo[166607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:33 compute-0 python3.9[166609]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:33 compute-0 sudo[166607]: pam_unix(sudo:session): session closed for user root
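The grep, lineinfile, and replace tasks above first guarantee an empty blacklist {} block (dropping any devnode ".*" catch-all the template may have carried), then pin four options under the defaults section. The affected fragment of /etc/multipath.conf should end up roughly as below; the rest of the file comes from the copied template, whose content is not logged, and the option order may differ:

    defaults {
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            user_friendly_names no
    }
    blacklist {
    }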
Dec 10 19:42:34 compute-0 sudo[166759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxjhyoslrxlxakoryplnhefqwklhrqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395753.8615558-307-35908088982021/AnsiballZ_stat.py'
Dec 10 19:42:34 compute-0 sudo[166759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:34 compute-0 python3.9[166761]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:42:34 compute-0 sudo[166759]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:34 compute-0 sudo[166922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzvxxaqrdalxwjzxnegjtscoktcprjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395754.5783494-315-41270044815838/AnsiballZ_file.py'
Dec 10 19:42:34 compute-0 sudo[166922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:34 compute-0 podman[166887]: 2025-12-10 19:42:34.912045347 +0000 UTC m=+0.088851151 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 19:42:35 compute-0 python3.9[166932]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:35 compute-0 sudo[166922]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:35 compute-0 sudo[167091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smhhrelolvbuefnobugzcumijaedyvow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395755.2974026-324-269908589516307/AnsiballZ_file.py'
Dec 10 19:42:35 compute-0 sudo[167091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:35 compute-0 python3.9[167093]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:42:35 compute-0 sudo[167091]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:36 compute-0 sudo[167243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecvhrwfmljqtqtdjbstimmijtrhnssrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395755.893437-332-173183021180252/AnsiballZ_stat.py'
Dec 10 19:42:36 compute-0 sudo[167243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:36 compute-0 python3.9[167245]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:36 compute-0 sudo[167243]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:36 compute-0 sudo[167321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpqrqpbqallbeghbzlaouaftvtzahhdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395755.893437-332-173183021180252/AnsiballZ_file.py'
Dec 10 19:42:36 compute-0 sudo[167321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:36 compute-0 python3.9[167323]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:42:36 compute-0 sudo[167321]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:37 compute-0 sudo[167473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmzjyznvimlqjbsexzntxysdnashshxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395756.847382-332-166019046890468/AnsiballZ_stat.py'
Dec 10 19:42:37 compute-0 sudo[167473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:37 compute-0 python3.9[167475]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:37 compute-0 sudo[167473]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:37 compute-0 sudo[167551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjlpusliaianklfgtsaqqonybtdqoiyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395756.847382-332-166019046890468/AnsiballZ_file.py'
Dec 10 19:42:37 compute-0 sudo[167551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:37 compute-0 python3.9[167553]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:42:37 compute-0 sudo[167551]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:38 compute-0 sudo[167703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvxwobaivnuodwjrxyrkoicrttsrkqvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395757.894604-355-155131984705629/AnsiballZ_file.py'
Dec 10 19:42:38 compute-0 sudo[167703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:38 compute-0 python3.9[167705]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:38 compute-0 sudo[167703]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:38 compute-0 sudo[167855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujsifiiferoyfqezgoakslrjdjpzlfik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395758.522556-363-115043923564106/AnsiballZ_stat.py'
Dec 10 19:42:38 compute-0 sudo[167855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:38 compute-0 python3.9[167857]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:38 compute-0 sudo[167855]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:39 compute-0 sudo[167933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cphxyrttxnivusryynpovthxibfalkpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395758.522556-363-115043923564106/AnsiballZ_file.py'
Dec 10 19:42:39 compute-0 sudo[167933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:39 compute-0 python3.9[167935]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:39 compute-0 sudo[167933]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:39 compute-0 sudo[168085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prpvavdkomqpcvxfgbssflozxxrxzmbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395759.5296707-375-7074280342920/AnsiballZ_stat.py'
Dec 10 19:42:39 compute-0 sudo[168085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:39 compute-0 python3.9[168087]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:39 compute-0 sudo[168085]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:40 compute-0 sudo[168163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaftunwkawwwmqgfgkblllghpodwbmlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395759.5296707-375-7074280342920/AnsiballZ_file.py'
Dec 10 19:42:40 compute-0 sudo[168163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:40 compute-0 python3.9[168165]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:40 compute-0 sudo[168163]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:40 compute-0 sudo[168315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnphpcfzvkjrepaquyjuapmmrzlzatfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395760.5872135-387-137932622158074/AnsiballZ_systemd.py'
Dec 10 19:42:40 compute-0 sudo[168315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:41 compute-0 python3.9[168317]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:42:41 compute-0 systemd[1]: Reloading.
Dec 10 19:42:41 compute-0 systemd-rc-local-generator[168347]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:42:41 compute-0 systemd-sysv-generator[168350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:42:41 compute-0 sudo[168315]: pam_unix(sudo:session): session closed for user root
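The unit file and the 91-edpm-container-shutdown.preset installed above are picked up by the daemon reload, after which the systemd task enables and starts the service. A quick host-side check that the wiring took effect:

    # the shutdown helper should now be registered, enabled and active
    systemctl is-enabled edpm-container-shutdown.service
    systemctl is-active edpm-container-shutdown.service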
Dec 10 19:42:42 compute-0 sudo[168504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owfspqweasasixodvyzcyemplkjwaogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395761.816047-395-190987931032380/AnsiballZ_stat.py'
Dec 10 19:42:42 compute-0 sudo[168504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:42 compute-0 python3.9[168506]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:42 compute-0 sudo[168504]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:42 compute-0 sudo[168582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogkbzwnvmtytqudeftcwjpsiuaylapkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395761.816047-395-190987931032380/AnsiballZ_file.py'
Dec 10 19:42:42 compute-0 sudo[168582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:42 compute-0 python3.9[168584]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:42 compute-0 sudo[168582]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:43 compute-0 sudo[168734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptindgjggesucjnbtqbedgfmkbwzthiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395763.1064005-407-16290350927451/AnsiballZ_stat.py'
Dec 10 19:42:43 compute-0 sudo[168734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:43 compute-0 python3.9[168736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:43 compute-0 sudo[168734]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:43 compute-0 sudo[168812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmrqazlxutwwarldnirakulubwyyirle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395763.1064005-407-16290350927451/AnsiballZ_file.py'
Dec 10 19:42:43 compute-0 sudo[168812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:44 compute-0 python3.9[168814]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:44 compute-0 sudo[168812]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:44 compute-0 sudo[168964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izzrdejcdfxfsblvjwmxvxfaicufmnzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395764.3296883-419-197768961134124/AnsiballZ_systemd.py'
Dec 10 19:42:44 compute-0 sudo[168964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:44 compute-0 python3.9[168966]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:42:44 compute-0 systemd[1]: Reloading.
Dec 10 19:42:45 compute-0 systemd-rc-local-generator[168994]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:42:45 compute-0 systemd-sysv-generator[168997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:42:45 compute-0 systemd[1]: Starting Create netns directory...
Dec 10 19:42:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 10 19:42:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 10 19:42:45 compute-0 systemd[1]: Finished Create netns directory.
Dec 10 19:42:45 compute-0 sudo[168964]: pam_unix(sudo:session): session closed for user root
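netns-placeholder is a oneshot helper: it starts, both the service and the associated run-netns-placeholder.mount unit deactivate, and systemd reports the netns directory created. Judging by the ovn_metadata_agent volume list earlier (/run/netns:/run/netns:shared), its purpose is to make sure /run/netns exists before containers bind-mount it; a simple check is:

    # the placeholder should leave the namespace directory present on the host
    ls -ld /run/netns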
Dec 10 19:42:45 compute-0 sudo[169157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywvikjyrbvfqzrcdiqiloueezuzjikur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395765.5351198-429-6388128712812/AnsiballZ_file.py'
Dec 10 19:42:45 compute-0 sudo[169157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:46 compute-0 python3.9[169159]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:42:46 compute-0 sudo[169157]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:46 compute-0 sudo[169309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moyzkmjqkjtcpiszpbwfkwsthrbusbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395766.200409-437-122536093011382/AnsiballZ_stat.py'
Dec 10 19:42:46 compute-0 sudo[169309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:46 compute-0 python3.9[169311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:46 compute-0 sudo[169309]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:47 compute-0 sudo[169432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixyyktnbdhymeklrtcspdwsdjapnlnmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395766.200409-437-122536093011382/AnsiballZ_copy.py'
Dec 10 19:42:47 compute-0 sudo[169432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:47 compute-0 python3.9[169434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395766.200409-437-122536093011382/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:42:47 compute-0 sudo[169432]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:47 compute-0 sudo[169584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nazsseivarabuctccjxdovtgeocmmbge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395767.54367-454-224184603518545/AnsiballZ_file.py'
Dec 10 19:42:47 compute-0 sudo[169584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:47 compute-0 python3.9[169586]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:42:47 compute-0 sudo[169584]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:48 compute-0 sudo[169736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egygbbohpqvhvkbzpzcrgrpslvueknya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395768.1686687-462-37550013724335/AnsiballZ_stat.py'
Dec 10 19:42:48 compute-0 sudo[169736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:48 compute-0 python3.9[169738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:42:48 compute-0 sudo[169736]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:49 compute-0 sudo[169859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekihvycownqgzoloinaehcnyfvekqled ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395768.1686687-462-37550013724335/AnsiballZ_copy.py'
Dec 10 19:42:49 compute-0 sudo[169859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:49 compute-0 python3.9[169861]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395768.1686687-462-37550013724335/.source.json _original_basename=.6cd8ry6y follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:49 compute-0 sudo[169859]: pam_unix(sudo:session): session closed for user root
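The multipathd.json written here is the kolla configuration the container reads at startup under KOLLA_CONFIG_STRATEGY=COPY_ALWAYS; it is bind-mounted to /var/lib/kolla/config_files/config.json in the podman create call further down. Its content is masked in the log (content=NOT_LOGGING_PARAMETER), so the structure sketched in the comments below is only the generic kolla config shape, not the deployed file:

    # inspect the file on the node; its real content is not logged above
    cat /var/lib/kolla/config_files/multipathd.json
    # generic kolla config.json shape (illustrative only, not the logged value):
    # { "command": "<service command line>",
    #   "config_files": [ ... ],
    #   "permissions": [ ... ] }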
Dec 10 19:42:49 compute-0 sudo[170011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enpdyuzokzmtpdhmrlicmtytmxyaumhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395769.6233451-477-258546219814671/AnsiballZ_file.py'
Dec 10 19:42:49 compute-0 sudo[170011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:50 compute-0 python3.9[170013]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:50 compute-0 sudo[170011]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:50 compute-0 sudo[170163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztyjfrvcuqdctyiwjvxkwsgrdbqdshju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395770.3458552-485-79803578180350/AnsiballZ_stat.py'
Dec 10 19:42:50 compute-0 sudo[170163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:50 compute-0 sudo[170163]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:51 compute-0 sudo[170286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wipeejaahsdszeaqkwphbnvispdaoflb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395770.3458552-485-79803578180350/AnsiballZ_copy.py'
Dec 10 19:42:51 compute-0 sudo[170286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:51 compute-0 sudo[170286]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:52 compute-0 sudo[170438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfyounjtblijwocdcjumrmgrulrrjqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395771.7190104-502-151931837211735/AnsiballZ_container_config_data.py'
Dec 10 19:42:52 compute-0 sudo[170438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:52 compute-0 python3.9[170440]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 10 19:42:52 compute-0 sudo[170438]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:53 compute-0 sudo[170590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvehkwrjlusctqwjpzhpgyuaxfughgsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395772.6757522-511-163778676960051/AnsiballZ_container_config_hash.py'
Dec 10 19:42:53 compute-0 sudo[170590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:53 compute-0 python3.9[170592]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:42:53 compute-0 sudo[170590]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:53 compute-0 sudo[170742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmcjkdfgtubytvukbkdmfsseyljruwvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395773.5219553-520-248333657739225/AnsiballZ_podman_container_info.py'
Dec 10 19:42:53 compute-0 sudo[170742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:54 compute-0 python3.9[170744]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 10 19:42:54 compute-0 sudo[170742]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:55 compute-0 sudo[170920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnyhvxgowkyctoghzxkzjfwownqnhzq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395774.7493641-533-91520390683252/AnsiballZ_edpm_container_manage.py'
Dec 10 19:42:55 compute-0 sudo[170920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:55 compute-0 python3[170922]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:42:55 compute-0 podman[170956]: 2025-12-10 19:42:55.654618903 +0000 UTC m=+0.052105708 container create b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:42:55 compute-0 podman[170956]: 2025-12-10 19:42:55.626956096 +0000 UTC m=+0.024442951 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 10 19:42:55 compute-0 python3[170922]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 10 19:42:55 compute-0 sudo[170920]: pam_unix(sudo:session): session closed for user root
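At this point edpm_container_manage has only created the multipathd container (the full podman create invocation is logged above); bringing it up is left to the edpm_multipathd.service unit installed at the end of this section, an EDPM convention rather than something the log states explicitly. The intermediate state can be checked with:

    # right after 'podman create' the container exists but is not yet running
    podman ps -a --filter name=multipathd --format '{{.Names}} {{.Status}}'
    podman inspect multipathd --format '{{.State.Status}}'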
Dec 10 19:42:56 compute-0 podman[171049]: 2025-12-10 19:42:56.08233948 +0000 UTC m=+0.060085054 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 19:42:56 compute-0 sudo[171162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgeokxufjbzcbcgfketdwwpryzqucvcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395775.967407-541-227806958101046/AnsiballZ_stat.py'
Dec 10 19:42:56 compute-0 sudo[171162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:56 compute-0 python3.9[171164]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:42:56 compute-0 sudo[171162]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:56 compute-0 sudo[171316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elhuqkoqbufrzqhqwpwhiedulspvzska ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395776.7347915-550-255661243640987/AnsiballZ_file.py'
Dec 10 19:42:56 compute-0 sudo[171316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:57 compute-0 python3.9[171318]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:57 compute-0 sudo[171316]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:57 compute-0 sudo[171392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btkwhkdpzfntdyakeiapitjaqlkkgbtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395776.7347915-550-255661243640987/AnsiballZ_stat.py'
Dec 10 19:42:57 compute-0 sudo[171392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:57 compute-0 python3.9[171394]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:42:57 compute-0 sudo[171392]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:58 compute-0 sudo[171543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfniluymyrgbdnwklwsqdokdvikfczzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395777.644458-550-141424416788622/AnsiballZ_copy.py'
Dec 10 19:42:58 compute-0 sudo[171543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:58 compute-0 python3.9[171545]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395777.644458-550-141424416788622/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:42:58 compute-0 sudo[171543]: pam_unix(sudo:session): session closed for user root
Dec 10 19:42:58 compute-0 sudo[171619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pousryxllklacqhjtfhbsrhbkdclnttr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395777.644458-550-141424416788622/AnsiballZ_systemd.py'
Dec 10 19:42:58 compute-0 sudo[171619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:58 compute-0 python3.9[171621]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:42:58 compute-0 systemd[1]: Reloading.
Dec 10 19:42:58 compute-0 systemd-rc-local-generator[171648]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:42:58 compute-0 systemd-sysv-generator[171651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:42:59 compute-0 sudo[171619]: pam_unix(sudo:session): session closed for user root
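[annotation] The copy and daemon-reload at 19:42:58 install the freshly rendered /etc/systemd/system/edpm_multipathd.service; the task that follows enables and restarts that unit. Roughly the manual equivalent would be:

    # approximate manual equivalent of the ansible-systemd tasks above and below
    systemctl daemon-reload
    systemctl enable edpm_multipathd.service
    systemctl restart edpm_multipathd.service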
Dec 10 19:42:59 compute-0 sudo[171730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbumniqyumshoidpovbokzszprxwspvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395777.644458-550-141424416788622/AnsiballZ_systemd.py'
Dec 10 19:42:59 compute-0 sudo[171730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:42:59 compute-0 python3.9[171732]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:42:59 compute-0 systemd[1]: Reloading.
Dec 10 19:42:59 compute-0 systemd-rc-local-generator[171762]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:42:59 compute-0 systemd-sysv-generator[171765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:42:59 compute-0 systemd[1]: Starting multipathd container...
Dec 10 19:42:59 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2913bc101f74f0a52fe6ba34b482a76d202cb1fd0a4cf16379010cb0018449/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 10 19:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2913bc101f74f0a52fe6ba34b482a76d202cb1fd0a4cf16379010cb0018449/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 10 19:42:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.
Dec 10 19:42:59 compute-0 podman[171773]: 2025-12-10 19:42:59.996263437 +0000 UTC m=+0.113101697 container init b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 10 19:43:00 compute-0 multipathd[171787]: + sudo -E kolla_set_configs
Dec 10 19:43:00 compute-0 sudo[171793]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 10 19:43:00 compute-0 podman[171773]: 2025-12-10 19:43:00.026174744 +0000 UTC m=+0.143012994 container start b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 10 19:43:00 compute-0 sudo[171793]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:43:00 compute-0 sudo[171793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 10 19:43:00 compute-0 podman[171773]: multipathd
Dec 10 19:43:00 compute-0 systemd[1]: Started multipathd container.
Dec 10 19:43:00 compute-0 sudo[171730]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:00 compute-0 multipathd[171787]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:43:00 compute-0 multipathd[171787]: INFO:__main__:Validating config file
Dec 10 19:43:00 compute-0 multipathd[171787]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:43:00 compute-0 multipathd[171787]: INFO:__main__:Writing out command to execute
Dec 10 19:43:00 compute-0 sudo[171793]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:00 compute-0 multipathd[171787]: ++ cat /run_command
Dec 10 19:43:00 compute-0 multipathd[171787]: + CMD='/usr/sbin/multipathd -d'
Dec 10 19:43:00 compute-0 multipathd[171787]: + ARGS=
Dec 10 19:43:00 compute-0 multipathd[171787]: + sudo kolla_copy_cacerts
Dec 10 19:43:00 compute-0 sudo[171815]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 10 19:43:00 compute-0 sudo[171815]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:43:00 compute-0 sudo[171815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 10 19:43:00 compute-0 podman[171794]: 2025-12-10 19:43:00.099615509 +0000 UTC m=+0.060069704 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 19:43:00 compute-0 sudo[171815]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:00 compute-0 multipathd[171787]: + [[ ! -n '' ]]
Dec 10 19:43:00 compute-0 multipathd[171787]: + . kolla_extend_start
Dec 10 19:43:00 compute-0 multipathd[171787]: Running command: '/usr/sbin/multipathd -d'
Dec 10 19:43:00 compute-0 multipathd[171787]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 10 19:43:00 compute-0 multipathd[171787]: + umask 0022
Dec 10 19:43:00 compute-0 multipathd[171787]: + exec /usr/sbin/multipathd -d
Dec 10 19:43:00 compute-0 systemd[1]: b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7-4affe2859566195a.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:43:00 compute-0 systemd[1]: b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7-4affe2859566195a.service: Failed with result 'exit-code'.
Dec 10 19:43:00 compute-0 multipathd[171787]: 3109.209726 | --------start up--------
Dec 10 19:43:00 compute-0 multipathd[171787]: 3109.209740 | read /etc/multipath.conf
Dec 10 19:43:00 compute-0 multipathd[171787]: 3109.214560 | path checkers start up
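[annotation] The scripted start-up above is the standard Kolla entrypoint: kolla_set_configs reads /var/lib/kolla/config_files/config.json (bind-mounted from /var/lib/kolla/config_files/multipathd.json per the volumes list), applies the COPY_ALWAYS strategy, and writes the service command to /run_command, which the wrapper then cats into CMD and execs. The actual multipathd.json is not reproduced in this log; under the usual Kolla schema it would look roughly like:

    // illustrative sketch only - the real file is not shown in this log
    {
        "command": "/usr/sbin/multipathd -d",
        "config_files": []
    }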
Dec 10 19:43:00 compute-0 python3.9[171976]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:43:01 compute-0 sudo[172128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymbwjtrrymhqeanhmaswxfyygqiydbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395780.9572606-586-237824834877133/AnsiballZ_command.py'
Dec 10 19:43:01 compute-0 sudo[172128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:01 compute-0 python3.9[172130]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:01 compute-0 sudo[172128]: pam_unix(sudo:session): session closed for user root
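[annotation] That podman ps call lists the containers mounting /etc/multipath.conf (here just multipathd); together with the /etc/multipath/.multipath_restart_required marker stat'd at 19:43:00 and removed at 19:43:03, it presumably drives the decision to restart the container after a multipath config change. Reproduced interactively, the Go template needs quoting:

    # hedged reconstruction of the command Ansible ran without a shell
    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'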
Dec 10 19:43:01 compute-0 sudo[172293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gldtfudhlvvvqwyovyzbmaksprxosqka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395781.6297476-594-65473571466114/AnsiballZ_systemd.py'
Dec 10 19:43:01 compute-0 sudo[172293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:02 compute-0 python3.9[172295]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:43:02 compute-0 systemd[1]: Stopping multipathd container...
Dec 10 19:43:02 compute-0 multipathd[171787]: 3111.360954 | exit (signal)
Dec 10 19:43:02 compute-0 multipathd[171787]: 3111.361101 | --------shut down-------
Dec 10 19:43:02 compute-0 systemd[1]: libpod-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope: Deactivated successfully.
Dec 10 19:43:02 compute-0 podman[172299]: 2025-12-10 19:43:02.30038552 +0000 UTC m=+0.072942273 container died b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd)
Dec 10 19:43:02 compute-0 systemd[1]: b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7-4affe2859566195a.timer: Deactivated successfully.
Dec 10 19:43:02 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.
Dec 10 19:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7-userdata-shm.mount: Deactivated successfully.
Dec 10 19:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f2913bc101f74f0a52fe6ba34b482a76d202cb1fd0a4cf16379010cb0018449-merged.mount: Deactivated successfully.
Dec 10 19:43:02 compute-0 podman[172299]: 2025-12-10 19:43:02.364911683 +0000 UTC m=+0.137468456 container cleanup b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd)
Dec 10 19:43:02 compute-0 podman[172299]: multipathd
Dec 10 19:43:02 compute-0 podman[172325]: multipathd
Dec 10 19:43:02 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 10 19:43:02 compute-0 systemd[1]: Stopped multipathd container.
Dec 10 19:43:02 compute-0 systemd[1]: Starting multipathd container...
Dec 10 19:43:02 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2913bc101f74f0a52fe6ba34b482a76d202cb1fd0a4cf16379010cb0018449/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 10 19:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2913bc101f74f0a52fe6ba34b482a76d202cb1fd0a4cf16379010cb0018449/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 10 19:43:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.
Dec 10 19:43:02 compute-0 podman[172338]: 2025-12-10 19:43:02.56173335 +0000 UTC m=+0.102399377 container init b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:43:02 compute-0 multipathd[172354]: + sudo -E kolla_set_configs
Dec 10 19:43:02 compute-0 sudo[172360]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 10 19:43:02 compute-0 sudo[172360]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:43:02 compute-0 sudo[172360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 10 19:43:02 compute-0 podman[172338]: 2025-12-10 19:43:02.597785595 +0000 UTC m=+0.138451592 container start b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 10 19:43:02 compute-0 podman[172338]: multipathd
Dec 10 19:43:02 compute-0 systemd[1]: Started multipathd container.
Dec 10 19:43:02 compute-0 multipathd[172354]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:43:02 compute-0 multipathd[172354]: INFO:__main__:Validating config file
Dec 10 19:43:02 compute-0 multipathd[172354]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:43:02 compute-0 multipathd[172354]: INFO:__main__:Writing out command to execute
Dec 10 19:43:02 compute-0 sudo[172360]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:02 compute-0 multipathd[172354]: ++ cat /run_command
Dec 10 19:43:02 compute-0 sudo[172293]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:02 compute-0 multipathd[172354]: + CMD='/usr/sbin/multipathd -d'
Dec 10 19:43:02 compute-0 multipathd[172354]: + ARGS=
Dec 10 19:43:02 compute-0 multipathd[172354]: + sudo kolla_copy_cacerts
Dec 10 19:43:02 compute-0 sudo[172377]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 10 19:43:02 compute-0 sudo[172377]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:43:02 compute-0 sudo[172377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 10 19:43:02 compute-0 sudo[172377]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:02 compute-0 multipathd[172354]: Running command: '/usr/sbin/multipathd -d'
Dec 10 19:43:02 compute-0 multipathd[172354]: + [[ ! -n '' ]]
Dec 10 19:43:02 compute-0 multipathd[172354]: + . kolla_extend_start
Dec 10 19:43:02 compute-0 multipathd[172354]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 10 19:43:02 compute-0 multipathd[172354]: + umask 0022
Dec 10 19:43:02 compute-0 multipathd[172354]: + exec /usr/sbin/multipathd -d
Dec 10 19:43:02 compute-0 podman[172361]: 2025-12-10 19:43:02.68053126 +0000 UTC m=+0.066098396 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:43:02 compute-0 multipathd[172354]: 3111.780236 | --------start up--------
Dec 10 19:43:02 compute-0 multipathd[172354]: 3111.780256 | read /etc/multipath.conf
Dec 10 19:43:02 compute-0 systemd[1]: b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7-7892dbdff700b0a0.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:43:02 compute-0 systemd[1]: b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7-7892dbdff700b0a0.service: Failed with result 'exit-code'.
Dec 10 19:43:02 compute-0 multipathd[172354]: 3111.787404 | path checkers start up
Dec 10 19:43:03 compute-0 sudo[172543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egqpgcpolsfyhsncfibanoyxnrytulgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395782.8145957-602-255869589329572/AnsiballZ_file.py'
Dec 10 19:43:03 compute-0 sudo[172543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:03 compute-0 python3.9[172545]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:03 compute-0 sudo[172543]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:03 compute-0 sudo[172695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hizotvmvuqiuzgcbwtvxgjpddvfewwqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395783.6106849-614-266535772942866/AnsiballZ_file.py'
Dec 10 19:43:03 compute-0 sudo[172695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:04 compute-0 python3.9[172697]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 10 19:43:04 compute-0 sudo[172695]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:04 compute-0 sudo[172847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozckxufkjjyhsdpvkbnlrrrcrjbrifzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395784.2782514-622-159281026145522/AnsiballZ_modprobe.py'
Dec 10 19:43:04 compute-0 sudo[172847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:04 compute-0 python3.9[172849]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 10 19:43:04 compute-0 kernel: Key type psk registered
Dec 10 19:43:04 compute-0 sudo[172847]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:05 compute-0 podman[172928]: 2025-12-10 19:43:05.093815163 +0000 UTC m=+0.075277715 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 10 19:43:05 compute-0 sudo[173037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxlvcsufmyuxxfpupzjnmjnelnulnpvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395784.9591014-630-67771045462862/AnsiballZ_stat.py'
Dec 10 19:43:05 compute-0 sudo[173037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:05 compute-0 python3.9[173039]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:43:05 compute-0 sudo[173037]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:05 compute-0 sudo[173160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmmrogohqymdtnunxffqpkesbhmvzpwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395784.9591014-630-67771045462862/AnsiballZ_copy.py'
Dec 10 19:43:05 compute-0 sudo[173160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:05 compute-0 python3.9[173162]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395784.9591014-630-67771045462862/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:05 compute-0 sudo[173160]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:06 compute-0 sudo[173312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzpzzlbqjjqpupvvyfbwhtgauhabebbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395786.1797898-646-168312148093538/AnsiballZ_lineinfile.py'
Dec 10 19:43:06 compute-0 sudo[173312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:06 compute-0 python3.9[173314]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:06 compute-0 sudo[173312]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:07 compute-0 sudo[173464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zysqclzgiczjcpfoefhqdqqidkbasmwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395786.7741642-654-111333303624567/AnsiballZ_systemd.py'
Dec 10 19:43:07 compute-0 sudo[173464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:07 compute-0 python3.9[173466]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:43:07 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 10 19:43:07 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 10 19:43:07 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 10 19:43:07 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 10 19:43:07 compute-0 systemd[1]: Finished Load Kernel Modules.
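[annotation] Between 19:43:04 and 19:43:07 the play loads nvme-fabrics immediately via community.general.modprobe, persists it by writing /etc/modules-load.d/nvme-fabrics.conf and appending the same name to /etc/modules, and restarts systemd-modules-load.service so the drop-in takes effect. The drop-in content implied by those tasks is a single line, and the sequence is roughly equivalent to:

    # content of /etc/modules-load.d/nvme-fabrics.conf implied by the tasks above
    nvme-fabrics

    # approximate manual equivalent
    modprobe nvme-fabrics
    echo nvme-fabrics > /etc/modules-load.d/nvme-fabrics.conf
    systemctl restart systemd-modules-load.service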
Dec 10 19:43:07 compute-0 sudo[173464]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:07 compute-0 sudo[173620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pujismmmicuunsohkvauvhzsyspklktc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395787.6315684-662-169344028330642/AnsiballZ_dnf.py'
Dec 10 19:43:07 compute-0 sudo[173620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:08 compute-0 python3.9[173622]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:43:10 compute-0 systemd[1]: Reloading.
Dec 10 19:43:10 compute-0 systemd-rc-local-generator[173655]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:43:10 compute-0 systemd-sysv-generator[173659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:43:10 compute-0 systemd[1]: Reloading.
Dec 10 19:43:10 compute-0 systemd-rc-local-generator[173683]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:43:10 compute-0 systemd-sysv-generator[173689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:43:11 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 10 19:43:11 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 10 19:43:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 10 19:43:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 10 19:43:11 compute-0 systemd[1]: Reloading.
Dec 10 19:43:11 compute-0 systemd-rc-local-generator[173783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:43:11 compute-0 systemd-sysv-generator[173786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:43:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 10 19:43:12 compute-0 sudo[173620]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:12 compute-0 sudo[175011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfsnntlmdlemdkqdyrpbdndfbtggxwbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395792.2240853-670-72507341006916/AnsiballZ_systemd_service.py'
Dec 10 19:43:12 compute-0 sudo[175011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 10 19:43:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 10 19:43:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.564s CPU time.
Dec 10 19:43:12 compute-0 systemd[1]: run-r130505ac337946d4b273893d88fe063b.service: Deactivated successfully.
Dec 10 19:43:12 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 10 19:43:12 compute-0 python3.9[175030]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:43:12 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 10 19:43:12 compute-0 iscsid[163453]: iscsid shutting down.
Dec 10 19:43:12 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 10 19:43:12 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 10 19:43:12 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 10 19:43:12 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 10 19:43:12 compute-0 systemd[1]: Started Open-iSCSI.
Dec 10 19:43:12 compute-0 sudo[175011]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:13 compute-0 python3.9[175241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:43:13 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 10 19:43:14 compute-0 sudo[175396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npejfiuobdnnmcrrodueijwsbflimwwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395794.1802607-688-125992006739374/AnsiballZ_file.py'
Dec 10 19:43:14 compute-0 sudo[175396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:14 compute-0 python3.9[175398]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:14 compute-0 sudo[175396]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:15 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 10 19:43:15 compute-0 sudo[175549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvnvwultkdtpqrxojbpsczglbjfddusc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395795.0122066-699-217257379968183/AnsiballZ_systemd_service.py'
Dec 10 19:43:15 compute-0 sudo[175549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:15 compute-0 python3.9[175551]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:43:15 compute-0 systemd[1]: Reloading.
Dec 10 19:43:15 compute-0 systemd-rc-local-generator[175578]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:43:15 compute-0 systemd-sysv-generator[175582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:43:15 compute-0 sudo[175549]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:16 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 10 19:43:16 compute-0 python3.9[175736]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:43:16 compute-0 network[175753]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:43:16 compute-0 network[175754]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:43:16 compute-0 network[175755]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:43:21 compute-0 sudo[176027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnvpghzxpfnoxmzagisodulwfwmjclvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395801.2180057-718-11909479830825/AnsiballZ_systemd_service.py'
Dec 10 19:43:21 compute-0 sudo[176027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:21 compute-0 python3.9[176029]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:21 compute-0 sudo[176027]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:22 compute-0 sudo[176180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoksivsrlxvaoybxugasgtdmjmdeidkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395801.9552426-718-222669224106487/AnsiballZ_systemd_service.py'
Dec 10 19:43:22 compute-0 sudo[176180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:22 compute-0 python3.9[176182]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:22 compute-0 sudo[176180]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:23 compute-0 sudo[176333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgkothimggwsfnvyurvoehaxsesfuydx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395802.721364-718-73404195500787/AnsiballZ_systemd_service.py'
Dec 10 19:43:23 compute-0 sudo[176333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:23 compute-0 python3.9[176335]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:43:23.351 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:43:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:43:23.352 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:43:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:43:23.352 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:43:23 compute-0 sudo[176333]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:23 compute-0 sudo[176486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrunmqgpughaygzdxjhrerrwbmzorkbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395803.5105526-718-250799266738965/AnsiballZ_systemd_service.py'
Dec 10 19:43:23 compute-0 sudo[176486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:24 compute-0 python3.9[176488]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:24 compute-0 sudo[176486]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:24 compute-0 sudo[176641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpyuyrbblfeqhchbcjpllxmbfuxdbfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395804.2631161-718-190728974664213/AnsiballZ_systemd_service.py'
Dec 10 19:43:24 compute-0 sudo[176641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:24 compute-0 sshd-session[176489]: Received disconnect from 193.46.255.217 port 40642:11:  [preauth]
Dec 10 19:43:24 compute-0 sshd-session[176489]: Disconnected from authenticating user root 193.46.255.217 port 40642 [preauth]
Dec 10 19:43:24 compute-0 python3.9[176643]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:24 compute-0 sudo[176641]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:25 compute-0 sudo[176794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-safcrqiefbbyiufsohohcvjfmuaeputd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395804.9431555-718-9895849778873/AnsiballZ_systemd_service.py'
Dec 10 19:43:25 compute-0 sudo[176794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:25 compute-0 python3.9[176796]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:25 compute-0 sudo[176794]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:26 compute-0 sudo[176947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxpjcfhggmwrshuoqrufgupjvktlxbko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395805.7514315-718-174592067980455/AnsiballZ_systemd_service.py'
Dec 10 19:43:26 compute-0 sudo[176947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:26 compute-0 python3.9[176949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:26 compute-0 sudo[176947]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:26 compute-0 podman[176951]: 2025-12-10 19:43:26.409259545 +0000 UTC m=+0.073901647 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 19:43:26 compute-0 sudo[177119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqwvznlwswpxoagdfhzpswyfylkxafd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395806.4861472-718-188074161165662/AnsiballZ_systemd_service.py'
Dec 10 19:43:26 compute-0 sudo[177119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:27 compute-0 python3.9[177121]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:43:27 compute-0 sudo[177119]: pam_unix(sudo:session): session closed for user root
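[annotation] From 19:43:21 onward the play retires the legacy TripleO nova units on this node: each tripleo_nova_* service is stopped and disabled via ansible.builtin.systemd_service, and the file tasks that follow delete the corresponding unit files from /usr/lib/systemd/system. Collapsed into shell, the retirement amounts to something like:

    # sketch only - the playbook runs these as separate Ansible tasks per service
    for svc in tripleo_nova_compute tripleo_nova_migration_target \
               tripleo_nova_api_cron tripleo_nova_api tripleo_nova_conductor \
               tripleo_nova_metadata tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
        systemctl disable --now "${svc}.service"
        rm -f "/usr/lib/systemd/system/${svc}.service"
    done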
Dec 10 19:43:27 compute-0 sudo[177272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldbbybanzgpoqfwcnidrgdxxcevitqrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395807.4103565-777-234119612501142/AnsiballZ_file.py'
Dec 10 19:43:27 compute-0 sudo[177272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:27 compute-0 python3.9[177274]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:27 compute-0 sudo[177272]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:28 compute-0 sudo[177424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzeawtiebojidlpcxetqektcjjlrjzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395808.1198409-777-11947492334185/AnsiballZ_file.py'
Dec 10 19:43:28 compute-0 sudo[177424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:28 compute-0 python3.9[177426]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:28 compute-0 sudo[177424]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:29 compute-0 sudo[177576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyebdzldlvqwzglforuzljkuhyxznwvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395808.7964852-777-6457997539386/AnsiballZ_file.py'
Dec 10 19:43:29 compute-0 sudo[177576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:29 compute-0 python3.9[177578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:29 compute-0 sudo[177576]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:29 compute-0 sudo[177728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocnvxvgpcedzkszrzgushpgjlrgnhglg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395809.4822972-777-79675775844424/AnsiballZ_file.py'
Dec 10 19:43:29 compute-0 sudo[177728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:30 compute-0 python3.9[177730]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:30 compute-0 sudo[177728]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:30 compute-0 sudo[177880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvnuqyxfmdnhrdhltxrpooobtsltausc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395810.1603708-777-24300058274825/AnsiballZ_file.py'
Dec 10 19:43:30 compute-0 sudo[177880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:30 compute-0 python3.9[177882]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:30 compute-0 sudo[177880]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:31 compute-0 sudo[178032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wptktgjyqjcbiozmtskqcbnxhmaiazxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395810.8930476-777-94769938632543/AnsiballZ_file.py'
Dec 10 19:43:31 compute-0 sudo[178032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:31 compute-0 python3.9[178034]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:31 compute-0 sudo[178032]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:31 compute-0 sudo[178184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vevbcantflozwojxokrfzbtqevmpurwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395811.6152635-777-117561684806825/AnsiballZ_file.py'
Dec 10 19:43:31 compute-0 sudo[178184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:32 compute-0 python3.9[178186]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:32 compute-0 sudo[178184]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:32 compute-0 sudo[178336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opvdvmdlpxuyhtdnaowrxchqauoruije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395812.2778122-777-239126555484040/AnsiballZ_file.py'
Dec 10 19:43:32 compute-0 sudo[178336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:32 compute-0 python3.9[178338]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:32 compute-0 sudo[178336]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:33 compute-0 podman[178386]: 2025-12-10 19:43:33.097490149 +0000 UTC m=+0.073095425 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 10 19:43:33 compute-0 sudo[178508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzxtvncldbnhsfbjxsnwqsnauqtrapxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395812.9849854-834-203789393259173/AnsiballZ_file.py'
Dec 10 19:43:33 compute-0 sudo[178508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:33 compute-0 python3.9[178510]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:33 compute-0 sudo[178508]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:34 compute-0 sudo[178660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjfbcqjzryebwxsdarmmemlrbsxcoeus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395813.7067072-834-65586406366794/AnsiballZ_file.py'
Dec 10 19:43:34 compute-0 sudo[178660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:34 compute-0 python3.9[178662]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:34 compute-0 sudo[178660]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:34 compute-0 sudo[178812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocboiaywbtwdmusefxpevfwqxqenpulj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395814.3548489-834-205983078389638/AnsiballZ_file.py'
Dec 10 19:43:34 compute-0 sudo[178812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:34 compute-0 python3.9[178814]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:34 compute-0 sudo[178812]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:35 compute-0 sudo[178974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdssgjpntbmfdawmxigogjfitztdjexw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395814.926306-834-138528084489300/AnsiballZ_file.py'
Dec 10 19:43:35 compute-0 sudo[178974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:35 compute-0 podman[178938]: 2025-12-10 19:43:35.25976524 +0000 UTC m=+0.109418627 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 10 19:43:35 compute-0 python3.9[178984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:35 compute-0 sudo[178974]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:35 compute-0 sudo[179142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahaahazpitnmpjtilhinructmzurrepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395815.5324473-834-81677906402134/AnsiballZ_file.py'
Dec 10 19:43:35 compute-0 sudo[179142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:35 compute-0 python3.9[179144]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:35 compute-0 sudo[179142]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:36 compute-0 sudo[179294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmhdhaaerekfxarywtvofssalorjdavy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395816.1095736-834-12432551339695/AnsiballZ_file.py'
Dec 10 19:43:36 compute-0 sudo[179294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:36 compute-0 python3.9[179296]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:36 compute-0 sudo[179294]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:37 compute-0 sudo[179446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrjwimvvoylmrchjdzkmkyzrnujenglo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395816.7420382-834-23431628858882/AnsiballZ_file.py'
Dec 10 19:43:37 compute-0 sudo[179446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:37 compute-0 python3.9[179448]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:37 compute-0 sudo[179446]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:37 compute-0 sudo[179598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqdojhznxmzhwzozcbdskdsohvoirok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395817.336952-834-200638168888509/AnsiballZ_file.py'
Dec 10 19:43:37 compute-0 sudo[179598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:37 compute-0 python3.9[179600]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:43:37 compute-0 sudo[179598]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:38 compute-0 sudo[179750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzqsovzxvlwclkmtigyrydeyueepyvwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395818.0403235-892-102554603001809/AnsiballZ_command.py'
Dec 10 19:43:38 compute-0 sudo[179750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:38 compute-0 python3.9[179752]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:38 compute-0 sudo[179750]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:39 compute-0 python3.9[179904]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:43:40 compute-0 sudo[180054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zshcooelgwoabansjppehxqkbvwcupyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395819.7045608-910-244422412464340/AnsiballZ_systemd_service.py'
Dec 10 19:43:40 compute-0 sudo[180054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:40 compute-0 python3.9[180056]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:43:40 compute-0 systemd[1]: Reloading.
Dec 10 19:43:40 compute-0 systemd-rc-local-generator[180083]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:43:40 compute-0 systemd-sysv-generator[180087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:43:40 compute-0 sudo[180054]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:41 compute-0 sudo[180241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwtarkphkmrhzrufqptdhidurwjvtuch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395820.859504-918-185430745766647/AnsiballZ_command.py'
Dec 10 19:43:41 compute-0 sudo[180241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:41 compute-0 python3.9[180243]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:41 compute-0 sudo[180241]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:41 compute-0 sudo[180394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpqwyvmyysmldulwsqrygdcgkrvnfui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395821.4521189-918-246474891774229/AnsiballZ_command.py'
Dec 10 19:43:41 compute-0 sudo[180394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:41 compute-0 python3.9[180396]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:41 compute-0 sudo[180394]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:42 compute-0 sudo[180547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wshcrjdsqvrivdmniaiplvwcdebppkki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395822.0460649-918-83620184661650/AnsiballZ_command.py'
Dec 10 19:43:42 compute-0 sudo[180547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:42 compute-0 python3.9[180549]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:42 compute-0 sudo[180547]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:43 compute-0 sudo[180700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efvensxyafpoktsrluxetzyuzoaobzsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395822.7074633-918-260479295627369/AnsiballZ_command.py'
Dec 10 19:43:43 compute-0 sudo[180700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:43 compute-0 python3.9[180702]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:43 compute-0 sudo[180700]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:43 compute-0 sudo[180853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijxsaqhuteabdkhegqkornritdheluit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395823.401202-918-182841072468114/AnsiballZ_command.py'
Dec 10 19:43:43 compute-0 sudo[180853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:43 compute-0 python3.9[180855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:43 compute-0 sudo[180853]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:44 compute-0 sudo[181006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzijmhelwpkyjphujosbvgkevkliffoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395824.0186763-918-62380928717397/AnsiballZ_command.py'
Dec 10 19:43:44 compute-0 sudo[181006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:44 compute-0 python3.9[181008]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:44 compute-0 sudo[181006]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:44 compute-0 sudo[181159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pccmingdzcmmgepqtxentbdqtwxqlzpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395824.604065-918-16861684923174/AnsiballZ_command.py'
Dec 10 19:43:44 compute-0 sudo[181159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:45 compute-0 python3.9[181161]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:45 compute-0 sudo[181159]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:45 compute-0 sudo[181312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kntmyjrfxgxbroogwmsdvvsjbaxauunr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395825.2188845-918-187359623068878/AnsiballZ_command.py'
Dec 10 19:43:45 compute-0 sudo[181312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:45 compute-0 python3.9[181314]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:43:46 compute-0 sudo[181312]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:48 compute-0 sudo[181465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nigafsahyqmwhunpphkedqiscscfnjnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395827.7049167-997-51877297441800/AnsiballZ_file.py'
Dec 10 19:43:48 compute-0 sudo[181465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:48 compute-0 python3.9[181467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:48 compute-0 sudo[181465]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:48 compute-0 sudo[181617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgpxmbhwlzgrtqbdvuraiugbukcybza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395828.4635527-997-51861820052047/AnsiballZ_file.py'
Dec 10 19:43:48 compute-0 sudo[181617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:48 compute-0 python3.9[181619]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:48 compute-0 sudo[181617]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:49 compute-0 sudo[181769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laatuhwvrgospinimwlyhqixabbyllqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395829.100595-997-186788619130232/AnsiballZ_file.py'
Dec 10 19:43:49 compute-0 sudo[181769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:49 compute-0 python3.9[181771]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:49 compute-0 sudo[181769]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:50 compute-0 sudo[181921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrzshgyubtzoyrnukoubytpnzrsxqyve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395829.7888987-1019-114523374312910/AnsiballZ_file.py'
Dec 10 19:43:50 compute-0 sudo[181921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:50 compute-0 python3.9[181923]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:50 compute-0 sudo[181921]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:50 compute-0 sudo[182073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-simtmjqvcmuwtppxdkebjvfzbvjvxhwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395830.5233593-1019-23438058392294/AnsiballZ_file.py'
Dec 10 19:43:50 compute-0 sudo[182073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:51 compute-0 python3.9[182075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:51 compute-0 sudo[182073]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:51 compute-0 sudo[182225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qupmualxbwzlfacismnrvqrvqwxpqkep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395831.2693233-1019-11065981433830/AnsiballZ_file.py'
Dec 10 19:43:51 compute-0 sudo[182225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:51 compute-0 python3.9[182227]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:51 compute-0 sudo[182225]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:52 compute-0 sudo[182377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcenqzxsqvgasfodegpcrgetlyocvxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395831.9204013-1019-239433168673339/AnsiballZ_file.py'
Dec 10 19:43:52 compute-0 sudo[182377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:52 compute-0 python3.9[182379]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:52 compute-0 sudo[182377]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:52 compute-0 sudo[182529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfldhuwceihrqdcipcxirhgrpdutifyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395832.6415842-1019-56367051242088/AnsiballZ_file.py'
Dec 10 19:43:52 compute-0 sudo[182529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:53 compute-0 python3.9[182531]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:53 compute-0 sudo[182529]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:53 compute-0 sudo[182681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffltmubmjxdnjrqyxldueukenejtfycx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395833.3146393-1019-214857864421901/AnsiballZ_file.py'
Dec 10 19:43:53 compute-0 sudo[182681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:53 compute-0 python3.9[182683]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:53 compute-0 sudo[182681]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:54 compute-0 sudo[182833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkffkigehjftbkguqqeckkfuaopfeesa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395833.965554-1019-273426043922606/AnsiballZ_file.py'
Dec 10 19:43:54 compute-0 sudo[182833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:54 compute-0 python3.9[182835]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:43:54 compute-0 sudo[182833]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:57 compute-0 podman[182860]: 2025-12-10 19:43:57.108264287 +0000 UTC m=+0.081620160 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 19:43:58 compute-0 sudo[183004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqqoqcfohiywaqfjtqrdzhkklcrwbyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395838.3502724-1188-18918822339722/AnsiballZ_getent.py'
Dec 10 19:43:58 compute-0 sudo[183004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:58 compute-0 python3.9[183006]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 10 19:43:58 compute-0 sudo[183004]: pam_unix(sudo:session): session closed for user root
Dec 10 19:43:59 compute-0 sudo[183157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjxjpepldidyiksuzvkirqkrhlddnwnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395839.0856297-1196-30464874503862/AnsiballZ_group.py'
Dec 10 19:43:59 compute-0 sudo[183157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:43:59 compute-0 python3.9[183159]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 10 19:43:59 compute-0 groupadd[183160]: group added to /etc/group: name=nova, GID=42436
Dec 10 19:43:59 compute-0 groupadd[183160]: group added to /etc/gshadow: name=nova
Dec 10 19:43:59 compute-0 groupadd[183160]: new group: name=nova, GID=42436
Dec 10 19:43:59 compute-0 sudo[183157]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:00 compute-0 sudo[183315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goynljexjhwcyrvhmmjclpasfpujrzqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395840.1261766-1204-113331078725289/AnsiballZ_user.py'
Dec 10 19:44:00 compute-0 sudo[183315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:00 compute-0 python3.9[183317]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 10 19:44:00 compute-0 useradd[183319]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 10 19:44:00 compute-0 useradd[183319]: add 'nova' to group 'libvirt'
Dec 10 19:44:00 compute-0 useradd[183319]: add 'nova' to shadow group 'libvirt'
Dec 10 19:44:01 compute-0 sudo[183315]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:01 compute-0 sshd-session[183350]: Accepted publickey for zuul from 192.168.122.30 port 43886 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:44:01 compute-0 systemd-logind[789]: New session 25 of user zuul.
Dec 10 19:44:02 compute-0 systemd[1]: Started Session 25 of User zuul.
Dec 10 19:44:02 compute-0 sshd-session[183350]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:44:02 compute-0 sshd-session[183353]: Received disconnect from 192.168.122.30 port 43886:11: disconnected by user
Dec 10 19:44:02 compute-0 sshd-session[183353]: Disconnected from user zuul 192.168.122.30 port 43886
Dec 10 19:44:02 compute-0 sshd-session[183350]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:44:02 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 10 19:44:02 compute-0 systemd-logind[789]: Session 25 logged out. Waiting for processes to exit.
Dec 10 19:44:02 compute-0 systemd-logind[789]: Removed session 25.
Dec 10 19:44:02 compute-0 python3.9[183503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:03 compute-0 python3.9[183624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395842.337222-1229-174640604063049/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:03 compute-0 podman[183625]: 2025-12-10 19:44:03.404695484 +0000 UTC m=+0.067350336 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 19:44:03 compute-0 python3.9[183794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:04 compute-0 python3.9[183870]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:05 compute-0 python3.9[184020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:05 compute-0 podman[184115]: 2025-12-10 19:44:05.605020116 +0000 UTC m=+0.077972262 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 10 19:44:05 compute-0 python3.9[184154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395844.8357244-1229-72211687330252/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:06 compute-0 python3.9[184317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:06 compute-0 python3.9[184438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395845.8964307-1229-251971502901905/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:07 compute-0 python3.9[184588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:07 compute-0 python3.9[184709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395846.9577246-1229-75441939809968/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:08 compute-0 python3.9[184859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:08 compute-0 python3.9[184980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395848.0419872-1229-28605718160372/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:09 compute-0 sudo[185130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpxtwztlfrztmyjmzkhlurwtzixchtbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395849.140954-1312-24286985963462/AnsiballZ_file.py'
Dec 10 19:44:09 compute-0 sudo[185130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:09 compute-0 python3.9[185132]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:44:09 compute-0 sudo[185130]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:10 compute-0 sudo[185282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcgbvyfwpfossonzyphojfntnrrkzzma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395849.9331264-1320-169186467740551/AnsiballZ_copy.py'
Dec 10 19:44:10 compute-0 sudo[185282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:10 compute-0 python3.9[185284]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:44:10 compute-0 sudo[185282]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:10 compute-0 sudo[185434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfhrfhcptzvebslsfjljhohjyjoooiwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395850.6252608-1328-21914752033676/AnsiballZ_stat.py'
Dec 10 19:44:10 compute-0 sudo[185434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:11 compute-0 python3.9[185436]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:11 compute-0 sudo[185434]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:11 compute-0 sudo[185586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqhrriibpmxemrhipwhpmiymkclpgcox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395851.351916-1336-274710729438628/AnsiballZ_stat.py'
Dec 10 19:44:11 compute-0 sudo[185586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:11 compute-0 python3.9[185588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:11 compute-0 sudo[185586]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:12 compute-0 sudo[185709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcqqqcrdwmjmfjltlftghslnrntcqqzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395851.351916-1336-274710729438628/AnsiballZ_copy.py'
Dec 10 19:44:12 compute-0 sudo[185709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:12 compute-0 python3.9[185711]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765395851.351916-1336-274710729438628/.source _original_basename=.k7d4vkxx follow=False checksum=cf9de43eba6657f3b53034afa976cffb8e316342 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 10 19:44:12 compute-0 sudo[185709]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:13 compute-0 python3.9[185863]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:13 compute-0 python3.9[186015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:14 compute-0 python3.9[186136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395853.3357115-1362-244799444012906/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:15 compute-0 python3.9[186286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:15 compute-0 python3.9[186407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395854.4367337-1377-121369254652726/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:16 compute-0 sudo[186557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwuqlfalrohjgfsbkfnwaypuecvksquz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395855.8741326-1394-256405025449423/AnsiballZ_container_config_data.py'
Dec 10 19:44:16 compute-0 sudo[186557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:16 compute-0 python3.9[186559]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 10 19:44:16 compute-0 sudo[186557]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:17 compute-0 sudo[186709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpstxfhzsihyhaurnkfqxukcepsgdhpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395856.6493707-1403-2705843328979/AnsiballZ_container_config_hash.py'
Dec 10 19:44:17 compute-0 sudo[186709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:17 compute-0 python3.9[186711]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:44:17 compute-0 sudo[186709]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:17 compute-0 sudo[186861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivdxubynfoyntfgkzwqdncggwxvnstdy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395857.5232174-1413-273374116057336/AnsiballZ_edpm_container_manage.py'
Dec 10 19:44:17 compute-0 sudo[186861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:18 compute-0 python3[186863]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:44:18 compute-0 podman[186901]: 2025-12-10 19:44:18.413801811 +0000 UTC m=+0.071676064 container create 6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 10 19:44:18 compute-0 podman[186901]: 2025-12-10 19:44:18.384609924 +0000 UTC m=+0.042484157 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 10 19:44:18 compute-0 python3[186863]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 10 19:44:18 compute-0 sudo[186861]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:19 compute-0 sudo[187089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeohezqobuxjrmbshrlbqdqwymlsznzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395858.8136017-1421-105468381713051/AnsiballZ_stat.py'
Dec 10 19:44:19 compute-0 sudo[187089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:19 compute-0 python3.9[187091]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:19 compute-0 sudo[187089]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:20 compute-0 sudo[187243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdtulkhyhvcwbacdmuwyqjffmsxbxibn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395859.8968503-1433-4488128754339/AnsiballZ_container_config_data.py'
Dec 10 19:44:20 compute-0 sudo[187243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:20 compute-0 python3.9[187245]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 10 19:44:20 compute-0 sudo[187243]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:20 compute-0 sudo[187395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwjbevszonwkksfaloyicsdqvnuvihae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395860.6175883-1442-236300509887825/AnsiballZ_container_config_hash.py'
Dec 10 19:44:20 compute-0 sudo[187395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:21 compute-0 python3.9[187397]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:44:21 compute-0 sudo[187395]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:21 compute-0 sudo[187547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bktizrkilsefvfywpjdvgsrbryzwstzx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395861.3578913-1452-113016701361900/AnsiballZ_edpm_container_manage.py'
Dec 10 19:44:21 compute-0 sudo[187547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:21 compute-0 python3[187549]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:44:22 compute-0 podman[187586]: 2025-12-10 19:44:22.067361929 +0000 UTC m=+0.053202155 container create db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251202)
Dec 10 19:44:22 compute-0 podman[187586]: 2025-12-10 19:44:22.0380976 +0000 UTC m=+0.023937846 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 10 19:44:22 compute-0 python3[187549]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 10 19:44:22 compute-0 sudo[187547]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:22 compute-0 sudo[187774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pldqedpaxrqiaqfhdeeimjyjtuhufsfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395862.3877006-1460-152688594718328/AnsiballZ_stat.py'
Dec 10 19:44:22 compute-0 sudo[187774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:22 compute-0 python3.9[187776]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:22 compute-0 sudo[187774]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:44:23.353 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:44:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:44:23.353 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:44:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:44:23.353 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:44:23 compute-0 sudo[187928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nugtearphppgzhlzxgugpvjkhnxdrprk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395863.1655207-1469-277989401699207/AnsiballZ_file.py'
Dec 10 19:44:23 compute-0 sudo[187928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:23 compute-0 python3.9[187930]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:44:23 compute-0 sudo[187928]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:24 compute-0 sudo[188079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxoygbjefrzrinqrlquwnkuphcxwoenq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395863.847553-1469-262670352159087/AnsiballZ_copy.py'
Dec 10 19:44:24 compute-0 sudo[188079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:24 compute-0 python3.9[188081]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395863.847553-1469-262670352159087/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:44:24 compute-0 sudo[188079]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:24 compute-0 sudo[188155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otuohoiselfxhoymamcmfgrfsybekajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395863.847553-1469-262670352159087/AnsiballZ_systemd.py'
Dec 10 19:44:24 compute-0 sudo[188155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:25 compute-0 python3.9[188157]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:44:25 compute-0 systemd[1]: Reloading.
Dec 10 19:44:25 compute-0 systemd-rc-local-generator[188183]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:44:25 compute-0 systemd-sysv-generator[188186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:44:25 compute-0 sudo[188155]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:25 compute-0 sudo[188265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdockxtmwhnegoxvrrgyjqhxabausbzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395863.847553-1469-262670352159087/AnsiballZ_systemd.py'
Dec 10 19:44:25 compute-0 sudo[188265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:25 compute-0 python3.9[188267]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:44:26 compute-0 systemd[1]: Reloading.
Dec 10 19:44:26 compute-0 systemd-sysv-generator[188300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:44:26 compute-0 systemd-rc-local-generator[188295]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:44:26 compute-0 systemd[1]: Starting nova_compute container...
Dec 10 19:44:26 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:26 compute-0 podman[188306]: 2025-12-10 19:44:26.403188035 +0000 UTC m=+0.097267073 container init db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 19:44:26 compute-0 podman[188306]: 2025-12-10 19:44:26.410349548 +0000 UTC m=+0.104428496 container start db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Dec 10 19:44:26 compute-0 podman[188306]: nova_compute
Dec 10 19:44:26 compute-0 systemd[1]: Started nova_compute container.
Dec 10 19:44:26 compute-0 nova_compute[188320]: + sudo -E kolla_set_configs
Dec 10 19:44:26 compute-0 sudo[188265]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Validating config file
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying service configuration files
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Deleting /etc/ceph
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Creating directory /etc/ceph
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /etc/ceph
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Writing out command to execute
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:26 compute-0 nova_compute[188320]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 10 19:44:26 compute-0 nova_compute[188320]: ++ cat /run_command
Dec 10 19:44:26 compute-0 nova_compute[188320]: + CMD=nova-compute
Dec 10 19:44:26 compute-0 nova_compute[188320]: + ARGS=
Dec 10 19:44:26 compute-0 nova_compute[188320]: + sudo kolla_copy_cacerts
Dec 10 19:44:26 compute-0 nova_compute[188320]: + [[ ! -n '' ]]
Dec 10 19:44:26 compute-0 nova_compute[188320]: + . kolla_extend_start
Dec 10 19:44:26 compute-0 nova_compute[188320]: Running command: 'nova-compute'
Dec 10 19:44:26 compute-0 nova_compute[188320]: + echo 'Running command: '\''nova-compute'\'''
Dec 10 19:44:26 compute-0 nova_compute[188320]: + umask 0022
Dec 10 19:44:26 compute-0 nova_compute[188320]: + exec nova-compute
Dec 10 19:44:27 compute-0 podman[188455]: 2025-12-10 19:44:27.349466829 +0000 UTC m=+0.088247929 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 19:44:27 compute-0 python3.9[188489]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:28 compute-0 python3.9[188650]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:28 compute-0 nova_compute[188320]: 2025-12-10 19:44:28.607 188324 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 10 19:44:28 compute-0 nova_compute[188320]: 2025-12-10 19:44:28.608 188324 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 10 19:44:28 compute-0 nova_compute[188320]: 2025-12-10 19:44:28.608 188324 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 10 19:44:28 compute-0 nova_compute[188320]: 2025-12-10 19:44:28.608 188324 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 10 19:44:28 compute-0 nova_compute[188320]: 2025-12-10 19:44:28.770 188324 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:44:28 compute-0 nova_compute[188320]: 2025-12-10 19:44:28.801 188324 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:44:28 compute-0 nova_compute[188320]: 2025-12-10 19:44:28.801 188324 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 10 19:44:29 compute-0 python3.9[188804]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.520 188324 INFO nova.virt.driver [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.644 188324 INFO nova.compute.provider_config [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.661 188324 DEBUG oslo_concurrency.lockutils [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.662 188324 DEBUG oslo_concurrency.lockutils [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.662 188324 DEBUG oslo_concurrency.lockutils [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.663 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.663 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.663 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.663 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.663 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.664 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.664 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.664 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.664 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.664 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.665 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.665 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.665 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.665 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.665 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.666 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.666 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.666 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.666 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.666 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.666 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.667 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.667 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.667 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.667 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.667 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.668 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.668 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.668 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.668 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.669 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.669 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.669 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.669 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.669 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.670 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.670 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.670 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.670 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.670 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.671 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.671 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.671 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.671 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.671 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.672 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.672 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.672 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.672 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.672 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.673 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.673 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.673 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.673 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.673 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.674 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.674 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.674 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.674 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.674 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.675 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.675 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.675 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.675 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.675 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.676 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.676 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.676 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.676 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.676 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.676 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.677 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.677 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.677 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.677 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.677 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.678 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.678 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.678 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.678 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.678 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.678 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.678 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.679 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.679 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.679 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.679 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.679 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.679 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.679 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.680 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.680 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.680 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.680 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.680 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.680 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.680 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.681 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.681 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.681 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.681 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.681 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.681 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.682 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.682 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.682 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.682 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.682 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.683 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.683 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.683 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.683 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.683 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.683 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.684 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.684 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.684 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.684 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.684 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.685 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.685 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.685 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.685 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.685 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.685 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.686 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.686 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.686 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.686 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.686 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.687 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.687 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.687 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.687 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.687 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.688 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.688 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.688 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.688 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.688 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.688 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.689 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.689 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.689 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.689 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.689 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.689 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.689 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.690 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.690 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.690 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.690 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.690 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.690 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.691 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.691 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.691 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.691 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.691 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.691 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.691 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.692 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.692 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.692 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.692 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.692 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.692 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.693 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.693 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.693 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.693 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.693 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.693 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.693 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.694 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.694 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.694 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.694 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.694 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.694 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.695 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.695 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.695 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.695 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.695 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.695 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.696 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.696 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.696 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.696 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.696 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.696 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.696 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.697 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.697 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.697 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.697 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.697 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.697 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.697 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.698 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.698 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.698 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.698 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.698 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.698 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.699 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.699 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.699 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.699 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.699 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.699 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.699 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.700 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.700 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.700 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.700 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.700 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.700 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.700 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.701 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.701 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.701 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.701 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.701 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.701 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.701 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.702 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.702 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.702 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.702 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.702 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.702 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.702 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.703 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.703 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.703 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.703 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.703 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.703 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.704 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.704 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.704 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.704 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.704 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.704 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.704 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.705 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.705 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.705 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.705 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.705 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.705 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.705 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.706 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.706 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.706 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.706 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.706 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.706 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.707 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.707 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.707 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.707 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.707 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.707 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.707 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.708 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.708 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.708 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.708 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.708 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.708 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.709 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.709 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.709 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.709 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.709 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.709 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.709 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.710 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.710 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.710 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.710 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.710 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.710 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.710 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.711 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.711 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.711 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.711 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.711 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.711 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.712 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.712 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.712 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.712 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.712 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.713 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.714 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.714 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.714 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.714 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.714 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.714 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.715 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.715 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.715 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.715 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.715 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.715 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.715 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.716 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.716 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.716 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.716 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.716 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.716 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.716 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.717 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.717 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.717 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.717 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.717 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.717 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.717 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.718 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.718 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.718 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.718 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.718 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.718 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.719 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.719 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.719 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.719 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.719 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.719 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.719 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.720 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.720 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.720 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.720 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.720 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.720 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.720 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.721 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.721 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.721 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.721 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.721 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.722 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.722 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.722 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.722 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.722 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.722 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.722 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.723 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.723 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.723 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.723 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.723 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.723 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.723 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.724 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.724 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.724 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.724 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.724 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.724 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.725 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.726 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.726 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.726 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.726 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.726 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.726 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.726 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.727 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.727 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.727 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.727 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.727 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.728 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.728 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.728 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.728 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.728 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.728 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.728 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.729 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.730 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.730 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.730 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.730 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.730 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.730 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.730 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.731 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.731 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.731 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.731 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.731 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.731 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.732 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.732 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.732 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.732 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.732 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.732 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.733 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.734 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.734 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.734 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.734 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.734 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.734 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.735 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.735 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.735 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.735 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.735 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.735 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.736 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.736 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.736 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.736 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.736 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.736 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.737 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.737 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.737 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.737 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.737 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.737 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.738 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.738 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.738 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.738 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.738 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.738 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.739 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.739 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.739 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.739 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.739 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.739 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.739 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.740 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.740 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.740 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.740 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.740 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.740 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.741 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.741 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.741 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.741 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.741 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.741 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.741 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.742 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.742 188324 WARNING oslo_config.cfg [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 10 19:44:29 compute-0 nova_compute[188320]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 10 19:44:29 compute-0 nova_compute[188320]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 10 19:44:29 compute-0 nova_compute[188320]: and ``live_migration_inbound_addr`` respectively.
Dec 10 19:44:29 compute-0 nova_compute[188320]: ).  Its value may be silently ignored in the future.
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.742 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
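[Editor's note, not part of the captured log: the warning above says live_migration_uri is superseded by live_migration_scheme and live_migration_inbound_addr. A minimal illustrative sketch of the replacement settings in nova.conf's [libvirt] section follows; the scheme and address values are assumptions for illustration, not taken from this host, and the currently effective value on this node remains the deprecated live_migration_uri = qemu+tls://%s/system shown above.]
[libvirt]
# Scheme used to build the migration URI (replaces the scheme part of live_migration_uri).
live_migration_scheme = tls
# Address the destination host listens on for incoming live migrations
# (replaces the %s placeholder in live_migration_uri); hostname shown is hypothetical.
live_migration_inbound_addr = compute-1.internalapi.example.test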
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.742 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.742 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.743 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.743 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.743 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.743 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.743 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.743 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.743 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.744 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.744 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.744 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.744 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.744 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.744 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.745 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.745 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.745 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.745 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.745 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.745 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.745 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.746 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.746 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.746 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.746 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.746 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.746 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.746 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.747 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.747 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.747 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.747 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.747 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.747 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.748 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.748 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.748 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.748 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.748 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.748 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.748 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.749 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.749 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.749 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.749 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.749 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.749 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.749 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.750 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.750 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.750 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.750 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.750 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.750 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.750 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.751 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.751 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.751 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.751 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.751 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.752 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.753 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.753 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.753 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.753 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.753 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.753 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.753 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.754 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.754 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.754 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.754 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.754 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.754 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.755 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.755 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.755 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.755 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.755 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.755 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.755 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.756 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.756 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.756 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.756 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.756 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.756 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.756 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.757 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.757 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.757 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.757 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.757 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.757 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.757 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.758 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.758 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.758 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.758 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.758 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.758 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.758 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.759 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.759 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.759 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.759 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.759 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.759 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.759 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.760 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.760 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.760 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.760 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.760 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.760 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.760 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.761 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.761 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.761 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.761 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.761 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.761 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.762 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.762 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.762 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.762 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.762 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.762 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.762 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.763 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.763 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.763 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.763 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.763 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.763 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.764 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.764 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.764 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.764 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.764 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.764 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.765 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.766 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.766 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.766 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.766 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.766 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.766 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.767 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.767 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.767 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.767 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.767 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.767 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.767 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.768 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.768 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.768 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.768 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.768 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.768 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.768 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.769 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.769 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.769 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.769 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.769 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.769 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.770 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.770 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.770 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.770 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.770 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.770 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.770 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.771 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.771 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.771 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.771 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.771 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.771 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.772 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.772 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.772 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.772 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.772 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.772 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.773 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.773 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.773 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.773 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.773 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.773 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.773 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.774 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.774 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.774 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.774 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.774 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.774 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.774 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.775 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.775 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.775 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.775 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.775 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.775 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.775 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.776 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.776 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.776 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.776 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.776 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.776 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.777 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.778 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.778 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.778 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.778 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.778 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.778 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.778 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.779 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.779 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.779 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.779 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.779 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.780 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.780 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.780 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.780 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.780 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.780 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.780 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.781 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.781 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.781 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.781 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.781 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.781 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.781 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.782 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.782 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.782 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.782 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.782 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.782 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.782 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.783 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.784 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.784 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.784 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.784 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.784 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.784 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.784 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.785 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.785 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.785 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.785 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.785 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.785 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.785 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.786 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.786 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.786 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.786 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.786 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.786 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.786 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.787 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.787 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.787 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.787 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.787 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.787 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.788 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.788 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.788 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.788 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.788 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.788 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.789 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.790 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.790 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.790 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.790 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.790 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.790 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.791 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.791 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.791 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.791 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.791 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.791 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.791 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.792 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.792 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.792 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.792 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.792 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.792 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.792 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.793 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.793 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.793 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.793 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.793 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.793 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.793 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.794 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.794 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.794 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.794 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.794 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.794 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.795 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.795 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.795 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.795 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.795 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.795 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.795 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.796 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.796 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.796 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.796 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.796 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.796 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.796 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.797 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.797 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.797 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.797 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.797 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.797 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.797 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.798 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.798 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.798 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.798 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.798 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.798 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.799 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.799 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.799 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.799 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.799 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.799 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.799 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.800 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.800 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.800 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.800 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.800 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.800 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.800 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.801 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.801 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.801 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.801 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.801 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.801 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.801 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.802 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.802 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.802 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.802 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.802 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.802 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.803 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.803 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.803 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.803 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.803 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.803 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.803 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.804 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.804 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.804 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.804 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.804 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.804 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.804 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.805 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.805 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.805 188324 DEBUG oslo_service.service [None req-2f7d673c-7efe-4beb-b82f-9468807a4f7c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.806 188324 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.825 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.826 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.826 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.826 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 10 19:44:29 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 10 19:44:29 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.900 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f36d1fb13a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.903 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f36d1fb13a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.905 188324 INFO nova.virt.libvirt.driver [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Connection event '1' reason 'None'
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.939 188324 WARNING nova.virt.libvirt.driver [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 10 19:44:29 compute-0 nova_compute[188320]: 2025-12-10 19:44:29.940 188324 DEBUG nova.virt.libvirt.volume.mount [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 10 19:44:30 compute-0 sudo[189006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdiczwslmklrtdjcktztxumrfkktphze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395869.6161063-1529-237214688355656/AnsiballZ_podman_container.py'
Dec 10 19:44:30 compute-0 sudo[189006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:30 compute-0 python3.9[189008]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 10 19:44:30 compute-0 sudo[189006]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:30 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:44:30 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:44:30 compute-0 nova_compute[188320]: 2025-12-10 19:44:30.817 188324 INFO nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Libvirt host capabilities <capabilities>
Dec 10 19:44:30 compute-0 nova_compute[188320]: 
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <host>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <uuid>be15d2aa-13e4-4b81-819a-074ddfd2ac46</uuid>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <cpu>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <arch>x86_64</arch>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model>EPYC-Rome-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <vendor>AMD</vendor>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <microcode version='16777317'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <signature family='23' model='49' stepping='0'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='x2apic'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='tsc-deadline'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='osxsave'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='hypervisor'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='tsc_adjust'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='spec-ctrl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='stibp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='arch-capabilities'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='cmp_legacy'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='topoext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='virt-ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='lbrv'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='tsc-scale'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='vmcb-clean'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='pause-filter'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='pfthreshold'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='svme-addr-chk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='rdctl-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='skip-l1dfl-vmentry'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='mds-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature name='pschange-mc-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <pages unit='KiB' size='4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <pages unit='KiB' size='2048'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <pages unit='KiB' size='1048576'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </cpu>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <power_management>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <suspend_mem/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <suspend_disk/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <suspend_hybrid/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </power_management>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <iommu support='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <migration_features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <live/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <uri_transports>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <uri_transport>tcp</uri_transport>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <uri_transport>rdma</uri_transport>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </uri_transports>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </migration_features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <topology>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <cells num='1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <cell id='0'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           <memory unit='KiB'>7864308</memory>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           <pages unit='KiB' size='4'>1966077</pages>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           <pages unit='KiB' size='2048'>0</pages>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           <distances>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <sibling id='0' value='10'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           </distances>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           <cpus num='8'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:           </cpus>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         </cell>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </cells>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </topology>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <cache>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </cache>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <secmodel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model>selinux</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <doi>0</doi>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </secmodel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <secmodel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model>dac</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <doi>0</doi>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </secmodel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </host>
Dec 10 19:44:30 compute-0 nova_compute[188320]: 
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <guest>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <os_type>hvm</os_type>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <arch name='i686'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <wordsize>32</wordsize>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <domain type='qemu'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <domain type='kvm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </arch>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <pae/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <nonpae/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <acpi default='on' toggle='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <apic default='on' toggle='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <cpuselection/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <deviceboot/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <disksnapshot default='on' toggle='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <externalSnapshot/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </guest>
Dec 10 19:44:30 compute-0 nova_compute[188320]: 
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <guest>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <os_type>hvm</os_type>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <arch name='x86_64'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <wordsize>64</wordsize>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <domain type='qemu'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <domain type='kvm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </arch>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <acpi default='on' toggle='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <apic default='on' toggle='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <cpuselection/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <deviceboot/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <disksnapshot default='on' toggle='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <externalSnapshot/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </guest>
Dec 10 19:44:30 compute-0 nova_compute[188320]: 
Dec 10 19:44:30 compute-0 nova_compute[188320]: </capabilities>
Dec 10 19:44:30 compute-0 nova_compute[188320]: 
Dec 10 19:44:30 compute-0 nova_compute[188320]: 2025-12-10 19:44:30.825 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 10 19:44:30 compute-0 nova_compute[188320]: 2025-12-10 19:44:30.856 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 10 19:44:30 compute-0 nova_compute[188320]: <domainCapabilities>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <domain>kvm</domain>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <arch>i686</arch>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <vcpu max='240'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <iothreads supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <os supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <enum name='firmware'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <loader supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>rom</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pflash</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='readonly'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>yes</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='secure'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </loader>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </os>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <cpu>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='maximumMigratable'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <vendor>AMD</vendor>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='succor'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='custom' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cooperlake'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10-128'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10-256'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10-512'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='KnightsMill'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SierraForest'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='athlon'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='athlon-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='core2duo'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='core2duo-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='coreduo'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='coreduo-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='n270'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='n270-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='phenom'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='phenom-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </cpu>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <memoryBacking supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <enum name='sourceType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>file</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>anonymous</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>memfd</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </memoryBacking>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <devices>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <disk supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='diskDevice'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>disk</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>cdrom</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>floppy</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>lun</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>ide</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>fdc</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>sata</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </disk>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <graphics supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vnc</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>egl-headless</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </graphics>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <video supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='modelType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vga</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>cirrus</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>none</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>bochs</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>ramfb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </video>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <hostdev supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='mode'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>subsystem</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='startupPolicy'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>mandatory</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>requisite</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>optional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='subsysType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pci</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='capsType'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='pciBackend'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </hostdev>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <rng supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>random</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>egd</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </rng>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <filesystem supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='driverType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>path</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>handle</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtiofs</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </filesystem>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <tpm supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tpm-tis</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tpm-crb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>emulator</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>external</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendVersion'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>2.0</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </tpm>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <redirdev supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </redirdev>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <channel supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </channel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <crypto supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>qemu</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </crypto>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <interface supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>passt</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </interface>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <panic supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>isa</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>hyperv</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </panic>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <console supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>null</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vc</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>dev</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>file</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pipe</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>stdio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>udp</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tcp</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>qemu-vdagent</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </console>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </devices>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <gic supported='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <genid supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <backup supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <async-teardown supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <ps2 supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <sev supported='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <sgx supported='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <hyperv supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='features'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>relaxed</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vapic</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>spinlocks</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vpindex</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>runtime</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>synic</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>stimer</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>reset</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vendor_id</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>frequencies</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>reenlightenment</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tlbflush</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>ipi</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>avic</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>emsr_bitmap</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>xmm_input</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <defaults>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </defaults>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </hyperv>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <launchSecurity supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='sectype'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tdx</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </launchSecurity>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </features>
Dec 10 19:44:30 compute-0 nova_compute[188320]: </domainCapabilities>
Dec 10 19:44:30 compute-0 nova_compute[188320]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 10 19:44:30 compute-0 nova_compute[188320]: 2025-12-10 19:44:30.867 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 10 19:44:30 compute-0 nova_compute[188320]: <domainCapabilities>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <domain>kvm</domain>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <arch>i686</arch>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <vcpu max='4096'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <iothreads supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <os supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <enum name='firmware'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <loader supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>rom</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pflash</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='readonly'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>yes</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='secure'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </loader>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </os>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <cpu>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='maximumMigratable'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <vendor>AMD</vendor>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='succor'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='custom' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cooperlake'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Denverton-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='EPYC-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10-128'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10-256'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx10-512'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Haswell-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='KnightsMill'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SierraForest'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='athlon'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='athlon-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='core2duo'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='core2duo-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='coreduo'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='coreduo-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='n270'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='n270-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='phenom'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <blockers model='phenom-v1'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </cpu>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <memoryBacking supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <enum name='sourceType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>file</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>anonymous</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>memfd</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </memoryBacking>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <devices>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <disk supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='diskDevice'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>disk</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>cdrom</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>floppy</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>lun</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>fdc</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>sata</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </disk>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <graphics supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vnc</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>egl-headless</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </graphics>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <video supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='modelType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vga</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>cirrus</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>none</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>bochs</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>ramfb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </video>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <hostdev supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='mode'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>subsystem</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='startupPolicy'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>mandatory</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>requisite</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>optional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='subsysType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pci</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='capsType'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='pciBackend'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </hostdev>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <rng supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>random</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>egd</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </rng>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <filesystem supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='driverType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>path</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>handle</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>virtiofs</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </filesystem>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <tpm supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tpm-tis</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tpm-crb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>emulator</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>external</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendVersion'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>2.0</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </tpm>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <redirdev supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </redirdev>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <channel supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </channel>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <crypto supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>qemu</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </crypto>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <interface supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='backendType'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>passt</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </interface>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <panic supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>isa</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>hyperv</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </panic>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <console supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>null</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vc</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>dev</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>file</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pipe</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>stdio</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>udp</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tcp</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>qemu-vdagent</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </console>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </devices>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <features>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <gic supported='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <genid supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <backup supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <async-teardown supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <ps2 supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <sev supported='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <sgx supported='no'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <hyperv supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='features'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>relaxed</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vapic</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>spinlocks</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vpindex</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>runtime</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>synic</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>stimer</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>reset</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>vendor_id</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>frequencies</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>reenlightenment</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tlbflush</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>ipi</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>avic</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>emsr_bitmap</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>xmm_input</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <defaults>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </defaults>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </hyperv>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <launchSecurity supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='sectype'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>tdx</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </launchSecurity>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </features>
Dec 10 19:44:30 compute-0 nova_compute[188320]: </domainCapabilities>
Dec 10 19:44:30 compute-0 nova_compute[188320]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 10 19:44:30 compute-0 nova_compute[188320]: 2025-12-10 19:44:30.921 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 10 19:44:30 compute-0 nova_compute[188320]: 2025-12-10 19:44:30.927 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 10 19:44:30 compute-0 nova_compute[188320]: <domainCapabilities>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <domain>kvm</domain>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <arch>x86_64</arch>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <vcpu max='240'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <iothreads supported='yes'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <os supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <enum name='firmware'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <loader supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>rom</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>pflash</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='readonly'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>yes</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='secure'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </loader>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   </os>
Dec 10 19:44:30 compute-0 nova_compute[188320]:   <cpu>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <enum name='maximumMigratable'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:30 compute-0 nova_compute[188320]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <vendor>AMD</vendor>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:30 compute-0 nova_compute[188320]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='succor'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <mode name='custom' supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cooperlake'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10-128'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10-256'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10-512'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='KnightsMill'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SierraForest'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='athlon'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='athlon-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='core2duo'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='core2duo-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='coreduo'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='coreduo-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='n270'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='n270-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='phenom'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='phenom-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </cpu>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <memoryBacking supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <enum name='sourceType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>file</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>anonymous</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>memfd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </memoryBacking>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <devices>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <disk supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='diskDevice'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>disk</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>cdrom</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>floppy</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>lun</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>ide</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>fdc</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>sata</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </disk>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <graphics supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vnc</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>egl-headless</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </graphics>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <video supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='modelType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vga</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>cirrus</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>none</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>bochs</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>ramfb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </video>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <hostdev supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='mode'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>subsystem</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='startupPolicy'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>mandatory</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>requisite</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>optional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='subsysType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pci</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='capsType'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='pciBackend'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </hostdev>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <rng supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>random</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>egd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </rng>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <filesystem supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='driverType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>path</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>handle</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtiofs</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </filesystem>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <tpm supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tpm-tis</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tpm-crb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>emulator</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>external</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendVersion'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>2.0</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </tpm>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <redirdev supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </redirdev>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <channel supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </channel>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <crypto supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>qemu</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </crypto>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <interface supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>passt</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </interface>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <panic supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>isa</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>hyperv</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </panic>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <console supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>null</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vc</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>dev</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>file</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pipe</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>stdio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>udp</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tcp</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>qemu-vdagent</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </console>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </devices>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <features>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <gic supported='no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <genid supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <backup supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <async-teardown supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <ps2 supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <sev supported='no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <sgx supported='no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <hyperv supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='features'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>relaxed</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vapic</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>spinlocks</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vpindex</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>runtime</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>synic</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>stimer</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>reset</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vendor_id</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>frequencies</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>reenlightenment</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tlbflush</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>ipi</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>avic</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>emsr_bitmap</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>xmm_input</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <defaults>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </defaults>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </hyperv>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <launchSecurity supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='sectype'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tdx</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </launchSecurity>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </features>
Dec 10 19:44:31 compute-0 nova_compute[188320]: </domainCapabilities>
Dec 10 19:44:31 compute-0 nova_compute[188320]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:30.991 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 10 19:44:31 compute-0 nova_compute[188320]: <domainCapabilities>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <domain>kvm</domain>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <arch>x86_64</arch>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <vcpu max='4096'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <iothreads supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <os supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <enum name='firmware'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>efi</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <loader supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>rom</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pflash</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='readonly'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>yes</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='secure'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>yes</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>no</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </loader>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </os>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <cpu>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='maximumMigratable'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>on</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>off</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <vendor>AMD</vendor>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='succor'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <mode name='custom' supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cooperlake'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Denverton-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='auto-ibrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amd-psfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='stibp-always-on'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='EPYC-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10-128'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10-256'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx10-512'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='prefetchiti'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Haswell-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='KnightsMill'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512er'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512pf'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fma4'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tbm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xop'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='amx-tile'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-bf16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-fp16'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bitalg'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrc'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fzrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='la57'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='taa-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xfd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SierraForest'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ifma'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cmpccxadd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fbsdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='fsrs'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ibrs-all'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mcdt-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pbrsb-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='psdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='serialize'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vaes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='hle'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='rtm'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512bw'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512cd'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512dq'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512f'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='avx512vl'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='invpcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pcid'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='pku'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='mpx'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='core-capability'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='split-lock-detect'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='cldemote'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='erms'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='gfni'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdir64b'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='movdiri'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='xsaves'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='athlon'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='athlon-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='core2duo'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='core2duo-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='coreduo'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='coreduo-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='n270'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='n270-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='ss'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='phenom'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <blockers model='phenom-v1'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnow'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <feature name='3dnowext'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </blockers>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </mode>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </cpu>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <memoryBacking supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <enum name='sourceType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>file</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>anonymous</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <value>memfd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </memoryBacking>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <devices>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <disk supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='diskDevice'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>disk</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>cdrom</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>floppy</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>lun</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>fdc</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>sata</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </disk>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <graphics supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vnc</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>egl-headless</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </graphics>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <video supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='modelType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vga</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>cirrus</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>none</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>bochs</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>ramfb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </video>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <hostdev supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='mode'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>subsystem</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='startupPolicy'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>mandatory</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>requisite</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>optional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='subsysType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pci</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>scsi</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='capsType'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='pciBackend'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </hostdev>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <rng supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtio-non-transitional</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>random</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>egd</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </rng>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <filesystem supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='driverType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>path</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>handle</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>virtiofs</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </filesystem>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <tpm supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tpm-tis</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tpm-crb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>emulator</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>external</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendVersion'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>2.0</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </tpm>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <redirdev supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='bus'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>usb</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </redirdev>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <channel supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </channel>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <crypto supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>qemu</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendModel'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>builtin</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </crypto>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <interface supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='backendType'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>default</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>passt</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </interface>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <panic supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='model'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>isa</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>hyperv</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </panic>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <console supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='type'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>null</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vc</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pty</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>dev</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>file</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>pipe</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>stdio</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>udp</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tcp</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>unix</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>qemu-vdagent</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>dbus</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </console>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </devices>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   <features>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <gic supported='no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <genid supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <backup supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <async-teardown supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <ps2 supported='yes'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <sev supported='no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <sgx supported='no'/>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <hyperv supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='features'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>relaxed</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vapic</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>spinlocks</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vpindex</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>runtime</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>synic</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>stimer</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>reset</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>vendor_id</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>frequencies</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>reenlightenment</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tlbflush</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>ipi</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>avic</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>emsr_bitmap</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>xmm_input</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <defaults>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </defaults>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </hyperv>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     <launchSecurity supported='yes'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       <enum name='sectype'>
Dec 10 19:44:31 compute-0 nova_compute[188320]:         <value>tdx</value>
Dec 10 19:44:31 compute-0 nova_compute[188320]:       </enum>
Dec 10 19:44:31 compute-0 nova_compute[188320]:     </launchSecurity>
Dec 10 19:44:31 compute-0 nova_compute[188320]:   </features>
Dec 10 19:44:31 compute-0 nova_compute[188320]: </domainCapabilities>
Dec 10 19:44:31 compute-0 nova_compute[188320]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
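The XML dump that ends above is the raw virConnectGetDomainCapabilities answer that nova caches per emulator/arch/machine type in host.py. As a minimal sketch (the connection URI, arch and virt type are illustrative assumptions, not read from this host's configuration), the same document can be fetched and the advertised TPM and RNG support read back with libvirt-python:

    import xml.etree.ElementTree as ET
    import libvirt

    # Read-only access is enough for capability queries.
    conn = libvirt.openReadOnly('qemu:///system')

    # All arguments are optional; passing None lets libvirt pick its defaults.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # Enumerate the TPM backend models and RNG models advertised under <devices>,
    # matching the <tpm> and <rng> blocks in the dump above.
    tpm_backends = [v.text for v in root.findall("./devices/tpm/enum[@name='backendModel']/value")]
    rng_models = [v.text for v in root.findall("./devices/rng/enum[@name='model']/value")]
    print('tpm backends:', tpm_backends)  # ['emulator', 'external'] on this host
    print('rng models:', rng_models)      # ['virtio', 'virtio-transitional', 'virtio-non-transitional']
    conn.close()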
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.059 188324 DEBUG nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.060 188324 INFO nova.virt.libvirt.host [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Secure Boot support detected
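supports_secure_boot reaches the "Secure Boot support detected" verdict from the firmware/loader section of the same capabilities document (that section appears earlier in the dump, before the excerpt above). A hedged sketch of that kind of check, reusing the root element from the previous snippet; it mirrors the idea, not Nova's exact code:

    # The domainCapabilities <os><loader> block carries an enum named 'secure';
    # a 'yes' value means the emulator can enforce Secure Boot.
    secure_values = [v.text for v in root.findall("./os/loader/enum[@name='secure']/value")]
    print('secure boot capable:', 'yes' in secure_values)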
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.062 188324 INFO nova.virt.libvirt.driver [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.063 188324 INFO nova.virt.libvirt.driver [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.073 188324 DEBUG nova.virt.libvirt.driver [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.107 188324 INFO nova.virt.node [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Determined node identity fc709657-cb59-4c0b-8f54-5be8a24ab091 from /var/lib/nova/compute_id
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.131 188324 WARNING nova.compute.manager [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Compute nodes ['fc709657-cb59-4c0b-8f54-5be8a24ab091'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.170 188324 INFO nova.compute.manager [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.212 188324 WARNING nova.compute.manager [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.213 188324 DEBUG oslo_concurrency.lockutils [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.213 188324 DEBUG oslo_concurrency.lockutils [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.213 188324 DEBUG oslo_concurrency.lockutils [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.213 188324 DEBUG nova.compute.resource_tracker [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
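The "compute_resources" acquire/release pairs above come from oslo.concurrency's named-lock decorator, which the resource tracker uses to serialise cache cleaning, claims and inventory updates. A self-contained sketch of the pattern (the function name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Runs under the in-process lock named 'compute_resources'; concurrent
        # callers block until it is released, which is exactly what the
        # "Acquiring lock" / "acquired" / "released" debug lines trace.
        pass

    update_available_resource()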
Dec 10 19:44:31 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 10 19:44:31 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 10 19:44:31 compute-0 sudo[189211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqjkqbbeemixdzbotyiolzhokbxbgrmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395870.9779437-1537-131124543675071/AnsiballZ_systemd.py'
Dec 10 19:44:31 compute-0 sudo[189211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.496 188324 WARNING nova.virt.libvirt.driver [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.497 188324 DEBUG nova.compute.resource_tracker [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6044MB free_disk=72.60320663452148GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
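The pci_devices field in the hypervisor resource view above is a JSON list of per-device dicts. Purely as an illustration of that structure (one entry copied from the line above, not Nova code), it can be inspected like this:

    import json
    from collections import Counter

    pci_devices = json.loads(
        '[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0",'
        ' "product_id": "1001", "vendor_id": "1af4", "numa_node": null,'
        ' "label": "label_1af4_1001", "dev_type": "type-PCI"}]')
    print(Counter(dev['vendor_id'] for dev in pci_devices))  # Counter({'1af4': 1})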
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.497 188324 DEBUG oslo_concurrency.lockutils [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.498 188324 DEBUG oslo_concurrency.lockutils [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.513 188324 WARNING nova.compute.resource_tracker [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] No compute node record for compute-0.ctlplane.example.com:fc709657-cb59-4c0b-8f54-5be8a24ab091: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host fc709657-cb59-4c0b-8f54-5be8a24ab091 could not be found.
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.531 188324 INFO nova.compute.resource_tracker [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: fc709657-cb59-4c0b-8f54-5be8a24ab091
Dec 10 19:44:31 compute-0 python3.9[189216]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.594 188324 DEBUG nova.compute.resource_tracker [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:44:31 compute-0 nova_compute[188320]: 2025-12-10 19:44:31.594 188324 DEBUG nova.compute.resource_tracker [None req-1c9a2560-7b6c-4005-98d1-f960d5ff6dc7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:44:31 compute-0 systemd[1]: Stopping nova_compute container...
Dec 10 19:44:31 compute-0 virtqemud[188902]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 10 19:44:31 compute-0 virtqemud[188902]: hostname: compute-0
Dec 10 19:44:31 compute-0 virtqemud[188902]: End of file while reading data: Input/output error
Dec 10 19:44:31 compute-0 systemd[1]: libpod-db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786.scope: Deactivated successfully.
Dec 10 19:44:31 compute-0 systemd[1]: libpod-db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786.scope: Consumed 2.969s CPU time.
Dec 10 19:44:31 compute-0 podman[189222]: 2025-12-10 19:44:31.72073074 +0000 UTC m=+0.073293137 container died db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 10 19:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786-userdata-shm.mount: Deactivated successfully.
Dec 10 19:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8-merged.mount: Deactivated successfully.
Dec 10 19:44:31 compute-0 podman[189222]: 2025-12-10 19:44:31.781768135 +0000 UTC m=+0.134330532 container cleanup db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:44:31 compute-0 podman[189222]: nova_compute
Dec 10 19:44:31 compute-0 podman[189250]: nova_compute
Dec 10 19:44:31 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 10 19:44:31 compute-0 systemd[1]: Stopped nova_compute container.
Dec 10 19:44:31 compute-0 systemd[1]: Starting nova_compute container...
Dec 10 19:44:32 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096afaa62f653e6452231c4154f5f719b452b6b3afa6087cdf650ea547737ce8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:32 compute-0 podman[189263]: 2025-12-10 19:44:32.040502688 +0000 UTC m=+0.137083735 container init db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true)
Dec 10 19:44:32 compute-0 podman[189263]: 2025-12-10 19:44:32.054450574 +0000 UTC m=+0.151031571 container start db08fb50611798e19b114868fa498f72abc76646ac04b0303378498aba6fe786 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:44:32 compute-0 podman[189263]: nova_compute
Dec 10 19:44:32 compute-0 nova_compute[189279]: + sudo -E kolla_set_configs
Dec 10 19:44:32 compute-0 systemd[1]: Started nova_compute container.
Dec 10 19:44:32 compute-0 sudo[189211]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Validating config file
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying service configuration files
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /etc/ceph
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Creating directory /etc/ceph
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /etc/ceph
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Writing out command to execute
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:32 compute-0 nova_compute[189279]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 10 19:44:32 compute-0 nova_compute[189279]: ++ cat /run_command
Dec 10 19:44:32 compute-0 nova_compute[189279]: + CMD=nova-compute
Dec 10 19:44:32 compute-0 nova_compute[189279]: + ARGS=
Dec 10 19:44:32 compute-0 nova_compute[189279]: + sudo kolla_copy_cacerts
Dec 10 19:44:32 compute-0 nova_compute[189279]: + [[ ! -n '' ]]
Dec 10 19:44:32 compute-0 nova_compute[189279]: + . kolla_extend_start
Dec 10 19:44:32 compute-0 nova_compute[189279]: Running command: 'nova-compute'
Dec 10 19:44:32 compute-0 nova_compute[189279]: + echo 'Running command: '\''nova-compute'\'''
Dec 10 19:44:32 compute-0 nova_compute[189279]: + umask 0022
Dec 10 19:44:32 compute-0 nova_compute[189279]: + exec nova-compute
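The kolla_set_configs pass traced above reads /var/lib/kolla/config_files/config.json and, because KOLLA_CONFIG_STRATEGY is COPY_ALWAYS, deletes each destination and copies the source back in before fixing permissions; kolla_start then execs the command written to /run_command. A simplified sketch of the copy loop, assuming a config.json whose "config_files" entries carry source/dest/perm and that only regular files are involved (the real tool also handles directories, globs, owners and optional sources):

    import json
    import os
    import shutil

    def copy_config_files(config_path='/var/lib/kolla/config_files/config.json'):
        with open(config_path) as f:
            config = json.load(f)
        for entry in config.get('config_files', []):
            source, dest = entry['source'], entry['dest']
            # COPY_ALWAYS: drop whatever is currently at the destination.
            if os.path.lexists(dest):
                if os.path.isdir(dest) and not os.path.islink(dest):
                    shutil.rmtree(dest)
                else:
                    os.remove(dest)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy(source, dest)
            if 'perm' in entry:
                os.chmod(dest, int(entry['perm'], 8))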
Dec 10 19:44:32 compute-0 sudo[189440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obkyddphojglhobstuftnfnzhmsjomnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395872.3685105-1546-117901674691967/AnsiballZ_podman_container.py'
Dec 10 19:44:32 compute-0 sudo[189440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:33 compute-0 python3.9[189442]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 10 19:44:33 compute-0 systemd[1]: Started libpod-conmon-6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb.scope.
Dec 10 19:44:33 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94f6ffeb2e5b5ae873383f107465ce0e1a54211f99e0604bc76d4f3458a47fb/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94f6ffeb2e5b5ae873383f107465ce0e1a54211f99e0604bc76d4f3458a47fb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94f6ffeb2e5b5ae873383f107465ce0e1a54211f99e0604bc76d4f3458a47fb/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 10 19:44:33 compute-0 podman[189469]: 2025-12-10 19:44:33.354128252 +0000 UTC m=+0.161296278 container init 6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 10 19:44:33 compute-0 podman[189469]: 2025-12-10 19:44:33.363407383 +0000 UTC m=+0.170575409 container start 6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Dec 10 19:44:33 compute-0 python3.9[189442]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Applying nova statedir ownership
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 10 19:44:33 compute-0 nova_compute_init[189491]: INFO:nova_statedir:Nova statedir ownership complete
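nova_statedir_ownership.py (bind-mounted into the init container according to the config_data above) walks /var/lib/nova, re-chowns anything not already owned by the target nova UID/GID, and resets the SELinux context, skipping whatever NOVA_STATEDIR_OWNERSHIP_SKIP names. A simplified sketch of the ownership walk only; the target IDs match the log, while treating the skip variable as a colon-separated list and omitting the relabelling step are assumptions:

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = {p for p in os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '').split(':') if p}

    def fix_ownership(root='/var/lib/nova'):
        for dirpath, _dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # Chown the entry itself, not a symlink target.
                    os.chown(path, TARGET_UID, TARGET_GID, follow_symlinks=False)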
Dec 10 19:44:33 compute-0 systemd[1]: libpod-6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb.scope: Deactivated successfully.
Dec 10 19:44:33 compute-0 conmon[189484]: conmon 6bb2ce7ac9bb25098a04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb.scope/container/memory.events
Dec 10 19:44:33 compute-0 podman[189506]: 2025-12-10 19:44:33.491567066 +0000 UTC m=+0.034454609 container died 6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 19:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb-userdata-shm.mount: Deactivated successfully.
Dec 10 19:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f94f6ffeb2e5b5ae873383f107465ce0e1a54211f99e0604bc76d4f3458a47fb-merged.mount: Deactivated successfully.
Dec 10 19:44:33 compute-0 podman[189506]: 2025-12-10 19:44:33.530131676 +0000 UTC m=+0.073019199 container cleanup 6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Dec 10 19:44:33 compute-0 sudo[189440]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:33 compute-0 systemd[1]: libpod-conmon-6bb2ce7ac9bb25098a049b3550cabc6983eed6442d6b544ebc9cd011635647bb.scope: Deactivated successfully.
Dec 10 19:44:33 compute-0 podman[189505]: 2025-12-10 19:44:33.5736931 +0000 UTC m=+0.098242129 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 10 19:44:34 compute-0 sshd-session[161188]: Connection closed by 192.168.122.30 port 43246
Dec 10 19:44:34 compute-0 sshd-session[161185]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:44:34 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 10 19:44:34 compute-0 systemd[1]: session-24.scope: Consumed 1min 56.610s CPU time.
Dec 10 19:44:34 compute-0 systemd-logind[789]: Session 24 logged out. Waiting for processes to exit.
Dec 10 19:44:34 compute-0 systemd-logind[789]: Removed session 24.
Dec 10 19:44:34 compute-0 nova_compute[189279]: 2025-12-10 19:44:34.289 189283 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 10 19:44:34 compute-0 nova_compute[189279]: 2025-12-10 19:44:34.290 189283 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 10 19:44:34 compute-0 nova_compute[189279]: 2025-12-10 19:44:34.290 189283 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 10 19:44:34 compute-0 nova_compute[189279]: 2025-12-10 19:44:34.290 189283 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
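The three VIF plugins are discovered through stevedore entry points when os_vif is initialised; a consumer only needs the call below before plugging any ports (a minimal sketch):

    import os_vif

    # Scans the 'os_vif' entry-point namespace and instantiates each plugin,
    # which is what emits the "Loaded VIF plugin class ..." lines above.
    # Later, os_vif.plug()/os_vif.unplug() dispatch to the matching plugin.
    os_vif.initialize()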
Dec 10 19:44:34 compute-0 nova_compute[189279]: 2025-12-10 19:44:34.450 189283 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:44:34 compute-0 nova_compute[189279]: 2025-12-10 19:44:34.477 189283 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:44:34 compute-0 nova_compute[189279]: 2025-12-10 19:44:34.477 189283 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
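The grep above is a feature probe: the installed iscsiadm binary is searched for the node.session.scan option, and the non-zero grep exit is logged as a failure without retrying. The consuming pattern, sketched with oslo.concurrency (how the result is used downstream is an assumption, not shown in this log):

    from oslo_concurrency import processutils

    try:
        processutils.execute('grep', '-F', 'node.session.scan', '/sbin/iscsiadm')
        supports_manual_scan = True
    except processutils.ProcessExecutionError:
        # grep exits 1 when the string is absent; treat that as "not supported".
        supports_manual_scan = False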
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.270 189283 INFO nova.virt.driver [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.371 189283 INFO nova.compute.provider_config [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.385 189283 DEBUG oslo_concurrency.lockutils [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.385 189283 DEBUG oslo_concurrency.lockutils [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.385 189283 DEBUG oslo_concurrency.lockutils [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
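Everything from the "Full set of CONF" line down to the end of this excerpt is oslo.config dumping every registered option because debug logging is enabled; the service launcher simply calls log_opt_values on its ConfigOpts object. A minimal reproduction using one of the options visible below (the logging setup is illustrative):

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([cfg.BoolOpt('allow_resize_to_same_host', default=False)])
    CONF([])  # parse an empty argument list

    # Prints the same banner / "option = value" layout seen in the lines below.
    CONF.log_opt_values(LOG, logging.DEBUG)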
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.385 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.386 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.386 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.386 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.386 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.386 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.387 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.387 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.387 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.387 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.387 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.387 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.388 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.388 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.388 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.388 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.388 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.389 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.389 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.389 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.389 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.389 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.390 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.390 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.390 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.390 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.390 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.390 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.391 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.391 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.391 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.391 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.391 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.392 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.392 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.392 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.392 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.392 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.393 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.393 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.393 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.393 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.393 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.394 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.394 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.394 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.394 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.395 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.395 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.395 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.395 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.395 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.395 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.396 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.396 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.396 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.396 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.396 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.397 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.397 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.397 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.397 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.397 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.397 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.398 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.398 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.398 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.398 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.398 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.398 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.399 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.399 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.399 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.399 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.399 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.400 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.400 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.400 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.400 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.400 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.401 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.401 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.401 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.401 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.401 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.402 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.402 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.402 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.402 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.403 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.403 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.403 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.403 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.403 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.403 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.404 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.404 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.404 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.404 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.404 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.405 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.405 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.405 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.405 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.405 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.405 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.406 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.406 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.406 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.406 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.406 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.407 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.407 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.407 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.407 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.407 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.408 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.408 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.408 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.408 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.408 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.409 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.409 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.409 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.409 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.409 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.409 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.410 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.410 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.410 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.410 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.410 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.411 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.411 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.411 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.411 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.411 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.411 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.412 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.412 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.412 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.412 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.412 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.413 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.413 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.413 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.413 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.413 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.413 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.414 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.414 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.414 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.414 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.414 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.415 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.415 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.415 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.415 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.415 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.416 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.416 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.416 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.416 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.417 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.417 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.417 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.417 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.417 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.418 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.418 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.418 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.418 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.418 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.418 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.419 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.419 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.419 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.419 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.419 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.420 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.420 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.420 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.420 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.420 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.421 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.421 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.421 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.421 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.421 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.421 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.422 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.422 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.422 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.422 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.422 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.423 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.423 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.423 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.423 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.423 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.423 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.424 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.424 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.424 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.424 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.424 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.425 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.425 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.425 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.425 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.425 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.425 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.426 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.426 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.426 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.426 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.426 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.427 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.427 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.427 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.427 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.427 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.428 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.428 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.428 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.428 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.428 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.428 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.429 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.429 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.429 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.429 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.429 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.430 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.430 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.430 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.430 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.430 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.430 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.431 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.431 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.431 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.431 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.431 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.432 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.432 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.432 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.432 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.432 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.433 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.433 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.433 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.433 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.433 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.434 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.434 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.434 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.434 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.434 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.434 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.435 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.435 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.435 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.435 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.435 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.436 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.436 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.436 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.436 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.437 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.437 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.437 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.437 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.437 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.437 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.438 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.438 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.438 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.438 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.438 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.438 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.439 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.439 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.439 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.439 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.439 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.440 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.440 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.440 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.440 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.440 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.441 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.441 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.441 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.441 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.441 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.442 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.442 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.442 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.442 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.442 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.443 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.443 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.443 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.443 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.443 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.444 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.444 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.444 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.444 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.444 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.445 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.445 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.445 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.445 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.445 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.446 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.446 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.446 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.446 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.446 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.446 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.447 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.447 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.447 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.447 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.447 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.448 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.448 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.448 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.448 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.448 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.449 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.449 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.449 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.449 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.449 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.449 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.450 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.450 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.450 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.450 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.450 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.451 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.451 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.451 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.451 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.451 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.451 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.452 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.452 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.452 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.452 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.452 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.453 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.453 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.453 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.453 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.454 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.454 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.454 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.454 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.454 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.455 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.455 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.455 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.455 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.455 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.455 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.456 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.456 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.456 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.456 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.456 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.457 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.457 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.457 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.457 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.457 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.458 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.458 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.458 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.458 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.458 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.458 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.459 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.459 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.459 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.459 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.459 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.460 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.460 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.460 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.460 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.460 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.461 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.461 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.461 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.461 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.461 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.462 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.462 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.462 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.462 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.462 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.462 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.463 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.463 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.463 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.463 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.463 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.464 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.464 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.464 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.464 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.464 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.465 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.465 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.465 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.465 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.465 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.466 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.466 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.466 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.466 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.466 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.467 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.467 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.467 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.467 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.467 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.468 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.468 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.468 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.468 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.468 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.469 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.469 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.469 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.469 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.469 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.470 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.470 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.470 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.470 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.470 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.470 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.471 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.471 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.471 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.471 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.471 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.472 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.472 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.472 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.472 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.472 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.473 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.473 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.473 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.473 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.473 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.474 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.474 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.474 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.474 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.475 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.475 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.475 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.475 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.475 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.475 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.476 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.476 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.476 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.476 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.477 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.477 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.477 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.477 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.477 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.477 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.478 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.478 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.478 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.478 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.478 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.479 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.479 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.479 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.479 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.479 189283 WARNING oslo_config.cfg [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 10 19:44:35 compute-0 nova_compute[189279]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 10 19:44:35 compute-0 nova_compute[189279]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 10 19:44:35 compute-0 nova_compute[189279]: and ``live_migration_inbound_addr`` respectively.
Dec 10 19:44:35 compute-0 nova_compute[189279]: ).  Its value may be silently ignored in the future.
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.480 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.480 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.480 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.480 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.480 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.481 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.481 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.481 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.481 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.482 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.482 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.482 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.482 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.482 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.483 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.483 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.483 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.483 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.484 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.484 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.484 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.484 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.484 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.485 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.485 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.485 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.485 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.486 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.486 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.486 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.486 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.487 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.487 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.487 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.487 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.487 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.488 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.488 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.488 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.488 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.488 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.489 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.489 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.489 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.489 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.489 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.490 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.490 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.490 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.490 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.490 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.491 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.491 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.491 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.491 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.491 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.492 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.492 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.492 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.492 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.492 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.493 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.493 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.493 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.493 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.493 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.494 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.494 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.494 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.494 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.495 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.495 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.495 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.495 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.495 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.496 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.496 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.496 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.496 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.496 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.497 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.497 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.497 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.497 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.497 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.498 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.498 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.498 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.498 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.498 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.499 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.499 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.499 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.499 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.499 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.500 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.500 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.500 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.500 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.500 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.501 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.501 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.501 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.501 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.501 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.502 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.502 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.502 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.502 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.502 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.503 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.503 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.503 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.503 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.503 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.504 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.504 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.504 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.504 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.504 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.505 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.505 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.505 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.505 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.505 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.506 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.506 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.506 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.506 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.506 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.507 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.507 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.507 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.507 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.507 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.507 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.508 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.508 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.508 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.509 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.509 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.509 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.509 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.509 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.510 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.510 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.510 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.510 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.511 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.511 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.511 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.511 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.511 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.512 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.512 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.512 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.512 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.512 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.513 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.513 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.513 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.513 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.513 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.514 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.514 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.514 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.514 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.514 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.515 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.515 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.515 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.515 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.515 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.516 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.516 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.516 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.516 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.516 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.517 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.517 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.517 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.517 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.517 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.518 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.518 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.518 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.518 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.519 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.519 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.519 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.519 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.519 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.519 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.520 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.520 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.520 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.520 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.521 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.521 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.521 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.521 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.521 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.522 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.522 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.522 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.522 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.522 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.523 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.523 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.523 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.523 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.523 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.524 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.524 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.524 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.524 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.524 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.525 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.525 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.525 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.525 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.525 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.526 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.526 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.526 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.526 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.526 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.526 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.527 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.527 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.527 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.527 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.527 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.528 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.528 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.528 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.528 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.528 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.529 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.529 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.529 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.529 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.529 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.530 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.530 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.530 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.530 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.531 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.531 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.531 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.531 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.532 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.532 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.532 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.532 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.533 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.533 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.533 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.533 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.533 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.534 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.534 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.534 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.534 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.534 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.535 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.535 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.535 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.535 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.535 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.535 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.536 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.536 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.536 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.536 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.537 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.537 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.537 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.537 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.538 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.538 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.538 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.538 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.539 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.539 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.539 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.539 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.539 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.540 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.540 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.540 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.540 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.540 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.541 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.541 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.541 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.541 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.541 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.542 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.542 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.542 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.542 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.542 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.543 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.543 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.543 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.543 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.543 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.544 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.544 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.544 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.544 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.544 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.544 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.545 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.545 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.545 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.545 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.545 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.546 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.546 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.546 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.546 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.546 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.546 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.547 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.547 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.547 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.547 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.548 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.548 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.548 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.548 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.548 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.549 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.549 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.549 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.549 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.550 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.550 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.550 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.550 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.550 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.551 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.551 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.551 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.551 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.551 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.552 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.552 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.552 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.552 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.552 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.552 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.553 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.553 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.553 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.553 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.553 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.554 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.554 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.554 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.554 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.554 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.554 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.555 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.555 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.555 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.555 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.555 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.556 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.556 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.556 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.556 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.556 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.556 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.557 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.557 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.557 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.557 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.557 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.558 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.558 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.558 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.558 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.558 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.559 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.559 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.559 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.559 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.559 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.559 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.560 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.560 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.560 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.560 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.560 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.561 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.561 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.561 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.561 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.561 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.562 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.562 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.562 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.562 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.562 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.562 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.563 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.563 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.563 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.563 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.563 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.564 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.564 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.564 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.564 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.564 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.565 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.565 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.565 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.565 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.566 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.566 189283 DEBUG oslo_service.service [None req-ddc4eaf6-7469-431c-914d-9b493c03de96 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.567 189283 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.583 189283 INFO nova.virt.node [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Determined node identity fc709657-cb59-4c0b-8f54-5be8a24ab091 from /var/lib/nova/compute_id
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.585 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.586 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.586 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.587 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.601 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f21e0370f10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.603 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f21e0370f10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.604 189283 INFO nova.virt.libvirt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Connection event '1' reason 'None'
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.613 189283 INFO nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Libvirt host capabilities <capabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]: 
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <host>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <uuid>be15d2aa-13e4-4b81-819a-074ddfd2ac46</uuid>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <arch>x86_64</arch>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model>EPYC-Rome-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <vendor>AMD</vendor>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <microcode version='16777317'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <signature family='23' model='49' stepping='0'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='x2apic'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='tsc-deadline'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='osxsave'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='hypervisor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='tsc_adjust'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='spec-ctrl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='stibp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='arch-capabilities'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='cmp_legacy'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='topoext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='virt-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='lbrv'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='tsc-scale'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='vmcb-clean'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='pause-filter'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='pfthreshold'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='svme-addr-chk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='rdctl-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='skip-l1dfl-vmentry'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='mds-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature name='pschange-mc-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <pages unit='KiB' size='4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <pages unit='KiB' size='2048'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <pages unit='KiB' size='1048576'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <power_management>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <suspend_mem/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <suspend_disk/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <suspend_hybrid/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </power_management>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <iommu support='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <migration_features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <live/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <uri_transports>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <uri_transport>tcp</uri_transport>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <uri_transport>rdma</uri_transport>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </uri_transports>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </migration_features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <topology>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <cells num='1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <cell id='0'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           <memory unit='KiB'>7864308</memory>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           <pages unit='KiB' size='4'>1966077</pages>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           <pages unit='KiB' size='2048'>0</pages>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           <distances>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <sibling id='0' value='10'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           </distances>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           <cpus num='8'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:           </cpus>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         </cell>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </cells>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </topology>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <cache>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </cache>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <secmodel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model>selinux</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <doi>0</doi>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </secmodel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <secmodel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model>dac</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <doi>0</doi>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </secmodel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </host>
Dec 10 19:44:35 compute-0 nova_compute[189279]: 
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <guest>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <os_type>hvm</os_type>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <arch name='i686'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <wordsize>32</wordsize>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <domain type='qemu'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <domain type='kvm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </arch>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <pae/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <nonpae/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <acpi default='on' toggle='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <apic default='on' toggle='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <cpuselection/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <deviceboot/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <disksnapshot default='on' toggle='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <externalSnapshot/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </guest>
Dec 10 19:44:35 compute-0 nova_compute[189279]: 
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <guest>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <os_type>hvm</os_type>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <arch name='x86_64'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <wordsize>64</wordsize>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <domain type='qemu'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <domain type='kvm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </arch>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <acpi default='on' toggle='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <apic default='on' toggle='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <cpuselection/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <deviceboot/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <disksnapshot default='on' toggle='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <externalSnapshot/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </guest>
Dec 10 19:44:35 compute-0 nova_compute[189279]: 
Dec 10 19:44:35 compute-0 nova_compute[189279]: </capabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]: 
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.620 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.622 189283 DEBUG nova.virt.libvirt.volume.mount [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.625 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 10 19:44:35 compute-0 nova_compute[189279]: <domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <domain>kvm</domain>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <arch>i686</arch>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <vcpu max='240'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <iothreads supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <os supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='firmware'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <loader supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>rom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pflash</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='readonly'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>yes</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='secure'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </loader>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </os>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='maximumMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <vendor>AMD</vendor>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='succor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='custom' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-128'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-256'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-512'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <memoryBacking supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='sourceType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>anonymous</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>memfd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </memoryBacking>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <disk supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='diskDevice'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>disk</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cdrom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>floppy</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>lun</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ide</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>fdc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>sata</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <graphics supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vnc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egl-headless</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </graphics>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <video supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='modelType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vga</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cirrus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>none</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>bochs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ramfb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </video>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hostdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='mode'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>subsystem</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='startupPolicy'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>mandatory</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>requisite</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>optional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='subsysType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pci</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='capsType'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='pciBackend'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hostdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <rng supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>random</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <filesystem supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='driverType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>path</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>handle</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtiofs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </filesystem>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <tpm supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-tis</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-crb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emulator</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>external</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendVersion'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>2.0</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </tpm>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <redirdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </redirdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <channel supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </channel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <crypto supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </crypto>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <interface supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>passt</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <panic supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>isa</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>hyperv</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </panic>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <console supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>null</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dev</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pipe</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stdio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>udp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tcp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu-vdagent</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </console>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <gic supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <genid supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backup supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <async-teardown supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <ps2 supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sev supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sgx supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hyperv supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='features'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>relaxed</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vapic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>spinlocks</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vpindex</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>runtime</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>synic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stimer</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reset</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vendor_id</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>frequencies</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reenlightenment</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tlbflush</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ipi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>avic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emsr_bitmap</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>xmm_input</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hyperv>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <launchSecurity supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='sectype'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tdx</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </launchSecurity>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </features>
Dec 10 19:44:35 compute-0 nova_compute[189279]: </domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.633 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 10 19:44:35 compute-0 nova_compute[189279]: <domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <domain>kvm</domain>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <arch>i686</arch>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <vcpu max='4096'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <iothreads supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <os supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='firmware'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <loader supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>rom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pflash</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='readonly'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>yes</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='secure'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </loader>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </os>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='maximumMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <vendor>AMD</vendor>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='succor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='custom' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-128'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-256'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-512'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <memoryBacking supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='sourceType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>anonymous</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>memfd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </memoryBacking>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <disk supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='diskDevice'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>disk</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cdrom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>floppy</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>lun</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>fdc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>sata</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <graphics supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vnc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egl-headless</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </graphics>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <video supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='modelType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vga</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cirrus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>none</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>bochs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ramfb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </video>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hostdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='mode'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>subsystem</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='startupPolicy'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>mandatory</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>requisite</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>optional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='subsysType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pci</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='capsType'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='pciBackend'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hostdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <rng supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>random</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <filesystem supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='driverType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>path</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>handle</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtiofs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </filesystem>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <tpm supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-tis</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-crb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emulator</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>external</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendVersion'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>2.0</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </tpm>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <redirdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </redirdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <channel supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </channel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <crypto supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </crypto>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <interface supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>passt</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <panic supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>isa</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>hyperv</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </panic>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <console supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>null</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dev</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pipe</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stdio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>udp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tcp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu-vdagent</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </console>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <gic supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <genid supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backup supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <async-teardown supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <ps2 supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sev supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sgx supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hyperv supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='features'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>relaxed</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vapic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>spinlocks</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vpindex</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>runtime</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>synic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stimer</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reset</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vendor_id</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>frequencies</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reenlightenment</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tlbflush</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ipi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>avic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emsr_bitmap</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>xmm_input</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hyperv>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <launchSecurity supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='sectype'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tdx</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </launchSecurity>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </features>
Dec 10 19:44:35 compute-0 nova_compute[189279]: </domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.664 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.670 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 10 19:44:35 compute-0 nova_compute[189279]: <domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <domain>kvm</domain>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <arch>x86_64</arch>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <vcpu max='240'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <iothreads supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <os supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='firmware'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <loader supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>rom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pflash</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='readonly'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>yes</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='secure'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </loader>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </os>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='maximumMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <vendor>AMD</vendor>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='succor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='custom' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-128'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-256'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-512'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <memoryBacking supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='sourceType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>anonymous</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>memfd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </memoryBacking>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <disk supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='diskDevice'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>disk</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cdrom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>floppy</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>lun</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ide</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>fdc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>sata</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <graphics supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vnc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egl-headless</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </graphics>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <video supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='modelType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vga</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cirrus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>none</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>bochs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ramfb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </video>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hostdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='mode'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>subsystem</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='startupPolicy'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>mandatory</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>requisite</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>optional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='subsysType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pci</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='capsType'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='pciBackend'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hostdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <rng supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>random</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <filesystem supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='driverType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>path</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>handle</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtiofs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </filesystem>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <tpm supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-tis</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-crb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emulator</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>external</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendVersion'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>2.0</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </tpm>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <redirdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </redirdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <channel supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </channel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <crypto supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </crypto>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <interface supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>passt</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <panic supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>isa</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>hyperv</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </panic>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <console supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>null</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dev</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pipe</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stdio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>udp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tcp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu-vdagent</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </console>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <gic supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <genid supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backup supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <async-teardown supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <ps2 supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sev supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sgx supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hyperv supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='features'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>relaxed</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vapic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>spinlocks</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vpindex</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>runtime</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>synic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stimer</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reset</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vendor_id</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>frequencies</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reenlightenment</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tlbflush</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ipi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>avic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emsr_bitmap</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>xmm_input</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hyperv>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <launchSecurity supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='sectype'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tdx</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </launchSecurity>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </features>
Dec 10 19:44:35 compute-0 nova_compute[189279]: </domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.741 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 10 19:44:35 compute-0 nova_compute[189279]: <domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <path>/usr/libexec/qemu-kvm</path>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <domain>kvm</domain>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <arch>x86_64</arch>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <vcpu max='4096'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <iothreads supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <os supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='firmware'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>efi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <loader supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>rom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pflash</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='readonly'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>yes</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='secure'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>yes</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>no</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </loader>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </os>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-passthrough' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='hostPassthroughMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='maximum' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='maximumMigratable'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>on</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>off</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='host-model' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <vendor>AMD</vendor>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='x2apic'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-deadline'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='hypervisor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc_adjust'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='spec-ctrl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='stibp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='cmp_legacy'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='overflow-recov'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='succor'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='amd-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='virt-ssbd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lbrv'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='tsc-scale'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='vmcb-clean'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='flushbyasid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pause-filter'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='pfthreshold'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='svme-addr-chk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <feature policy='disable' name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <mode name='custom' supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Broadwell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cascadelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Cooperlake-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Denverton-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Dhyana-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Genoa-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='auto-ibrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Milan-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amd-psfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='no-nested-data-bp'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='null-sel-clr-base'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='stibp-always-on'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-Rome-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='EPYC-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='GraniteRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-128'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-256'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx10-512'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='prefetchiti'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Haswell-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-noTSX'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v6'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Icelake-Server-v7'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='IvyBridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='KnightsMill-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4fmaps'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-4vnniw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512er'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512pf'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G4-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Opteron_G5-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fma4'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tbm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xop'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SapphireRapids-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='amx-tile'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-bf16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-fp16'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512-vpopcntdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bitalg'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vbmi2'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrc'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fzrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='la57'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='taa-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='tsx-ldtrk'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xfd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='SierraForest-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ifma'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-ne-convert'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx-vnni-int8'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='bus-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cmpccxadd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fbsdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='fsrs'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ibrs-all'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mcdt-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pbrsb-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='psdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='sbdr-ssdp-no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='serialize'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vaes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='vpclmulqdq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Client-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='hle'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='rtm'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Skylake-Server-v5'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512bw'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512cd'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512dq'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512f'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='avx512vl'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='invpcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pcid'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='pku'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='mpx'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v2'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v3'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='core-capability'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='split-lock-detect'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='Snowridge-v4'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='cldemote'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='erms'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='gfni'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdir64b'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='movdiri'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='xsaves'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='athlon-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='core2duo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='coreduo-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='n270-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='ss'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <blockers model='phenom-v1'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnow'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <feature name='3dnowext'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </blockers>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </mode>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <memoryBacking supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <enum name='sourceType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>anonymous</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <value>memfd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </memoryBacking>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <disk supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='diskDevice'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>disk</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cdrom</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>floppy</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>lun</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>fdc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>sata</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <graphics supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vnc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egl-headless</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </graphics>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <video supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='modelType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vga</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>cirrus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>none</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>bochs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ramfb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </video>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hostdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='mode'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>subsystem</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='startupPolicy'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>mandatory</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>requisite</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>optional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='subsysType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pci</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>scsi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='capsType'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='pciBackend'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hostdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <rng supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtio-non-transitional</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>random</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>egd</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <filesystem supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='driverType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>path</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>handle</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>virtiofs</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </filesystem>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <tpm supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-tis</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tpm-crb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emulator</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>external</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendVersion'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>2.0</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </tpm>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <redirdev supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='bus'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>usb</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </redirdev>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <channel supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </channel>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <crypto supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendModel'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>builtin</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </crypto>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <interface supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='backendType'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>default</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>passt</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <panic supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='model'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>isa</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>hyperv</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </panic>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <console supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='type'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>null</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vc</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pty</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dev</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>file</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>pipe</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stdio</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>udp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tcp</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>unix</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>qemu-vdagent</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>dbus</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </console>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   <features>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <gic supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <vmcoreinfo supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <genid supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backingStoreInput supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <backup supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <async-teardown supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <ps2 supported='yes'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sev supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <sgx supported='no'/>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <hyperv supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='features'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>relaxed</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vapic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>spinlocks</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vpindex</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>runtime</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>synic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>stimer</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reset</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>vendor_id</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>frequencies</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>reenlightenment</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tlbflush</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>ipi</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>avic</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>emsr_bitmap</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>xmm_input</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <spinlocks>4095</spinlocks>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <stimer_direct>on</stimer_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_direct>on</tlbflush_direct>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <tlbflush_extended>on</tlbflush_extended>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </defaults>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </hyperv>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     <launchSecurity supported='yes'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       <enum name='sectype'>
Dec 10 19:44:35 compute-0 nova_compute[189279]:         <value>tdx</value>
Dec 10 19:44:35 compute-0 nova_compute[189279]:       </enum>
Dec 10 19:44:35 compute-0 nova_compute[189279]:     </launchSecurity>
Dec 10 19:44:35 compute-0 nova_compute[189279]:   </features>
Dec 10 19:44:35 compute-0 nova_compute[189279]: </domainCapabilities>
Dec 10 19:44:35 compute-0 nova_compute[189279]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.812 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.813 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.813 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.813 189283 INFO nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Secure Boot support detected
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.815 189283 INFO nova.virt.libvirt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.816 189283 INFO nova.virt.libvirt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.826 189283 DEBUG nova.virt.libvirt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.845 189283 INFO nova.virt.node [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Determined node identity fc709657-cb59-4c0b-8f54-5be8a24ab091 from /var/lib/nova/compute_id
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.860 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Verified node fc709657-cb59-4c0b-8f54-5be8a24ab091 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 10 19:44:35 compute-0 nova_compute[189279]: 2025-12-10 19:44:35.886 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 10 19:44:36 compute-0 podman[189593]: 2025-12-10 19:44:36.12943167 +0000 UTC m=+0.110878419 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.266 189283 ERROR nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Could not retrieve compute node resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'fc709657-cb59-4c0b-8f54-5be8a24ab091' not found: No resource provider with uuid fc709657-cb59-4c0b-8f54-5be8a24ab091 found  ", "request_id": "req-6b6bcbed-dceb-4b19-bf5e-4e0f6953203d"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'fc709657-cb59-4c0b-8f54-5be8a24ab091' not found: No resource provider with uuid fc709657-cb59-4c0b-8f54-5be8a24ab091 found  ", "request_id": "req-6b6bcbed-dceb-4b19-bf5e-4e0f6953203d"}]}
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.286 189283 DEBUG oslo_concurrency.lockutils [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.286 189283 DEBUG oslo_concurrency.lockutils [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.287 189283 DEBUG oslo_concurrency.lockutils [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.288 189283 DEBUG nova.compute.resource_tracker [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.452 189283 WARNING nova.virt.libvirt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.453 189283 DEBUG nova.compute.resource_tracker [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6023MB free_disk=72.60157012939453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.453 189283 DEBUG oslo_concurrency.lockutils [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.453 189283 DEBUG oslo_concurrency.lockutils [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.605 189283 ERROR nova.compute.resource_tracker [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'fc709657-cb59-4c0b-8f54-5be8a24ab091' not found: No resource provider with uuid fc709657-cb59-4c0b-8f54-5be8a24ab091 found  ", "request_id": "req-b11f1293-3fab-4144-9b7c-704816b1fc38"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'fc709657-cb59-4c0b-8f54-5be8a24ab091' not found: No resource provider with uuid fc709657-cb59-4c0b-8f54-5be8a24ab091 found  ", "request_id": "req-b11f1293-3fab-4144-9b7c-704816b1fc38"}]}
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.606 189283 DEBUG nova.compute.resource_tracker [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.606 189283 DEBUG nova.compute.resource_tracker [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:44:36 compute-0 nova_compute[189279]: 2025-12-10 19:44:36.988 189283 INFO nova.scheduler.client.report [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [req-efe73b44-7606-4c6b-881f-c09815e80cf8] Created resource provider record via placement API for resource provider with UUID fc709657-cb59-4c0b-8f54-5be8a24ab091 and name compute-0.ctlplane.example.com.
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.018 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 10 19:44:37 compute-0 nova_compute[189279]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.018 189283 INFO nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] kernel doesn't support AMD SEV
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.019 189283 DEBUG nova.compute.provider_tree [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.019 189283 DEBUG nova.virt.libvirt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.076 189283 DEBUG nova.scheduler.client.report [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Updated inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.077 189283 DEBUG nova.compute.provider_tree [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Updating resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.077 189283 DEBUG nova.compute.provider_tree [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.185 189283 DEBUG nova.compute.provider_tree [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Updating resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.210 189283 DEBUG nova.compute.resource_tracker [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.210 189283 DEBUG oslo_concurrency.lockutils [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.210 189283 DEBUG nova.service [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.290 189283 DEBUG nova.service [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 10 19:44:37 compute-0 nova_compute[189279]: 2025-12-10 19:44:37.291 189283 DEBUG nova.servicegroup.drivers.db [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 10 19:44:39 compute-0 sshd-session[189620]: Accepted publickey for zuul from 192.168.122.30 port 35872 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:44:39 compute-0 systemd-logind[789]: New session 26 of user zuul.
Dec 10 19:44:39 compute-0 systemd[1]: Started Session 26 of User zuul.
Dec 10 19:44:39 compute-0 sshd-session[189620]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:44:40 compute-0 python3.9[189773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:44:41 compute-0 sudo[189927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yapiyaibefqlxcqfjqrgnvvcekmudayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395881.0447543-36-224068684891636/AnsiballZ_systemd_service.py'
Dec 10 19:44:41 compute-0 sudo[189927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:42 compute-0 python3.9[189929]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:44:42 compute-0 systemd[1]: Reloading.
Dec 10 19:44:42 compute-0 systemd-rc-local-generator[189949]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:44:42 compute-0 systemd-sysv-generator[189955]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:44:42 compute-0 sudo[189927]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:43 compute-0 python3.9[190114]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:44:43 compute-0 network[190131]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:44:43 compute-0 network[190132]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:44:43 compute-0 network[190133]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:44:48 compute-0 sudo[190405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iopwhopehvxytdpprgijgrpydzrhrhdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395888.3848698-55-17182943872727/AnsiballZ_systemd_service.py'
Dec 10 19:44:48 compute-0 sudo[190405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:49 compute-0 python3.9[190407]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:44:49 compute-0 sudo[190405]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:49 compute-0 sudo[190558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhprniafyhhlcggynxsczucgqmjwbxsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395889.3375251-65-2957953086766/AnsiballZ_file.py'
Dec 10 19:44:49 compute-0 sudo[190558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:50 compute-0 python3.9[190560]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:44:50 compute-0 sudo[190558]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:50 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:44:50 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:44:50 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:44:50 compute-0 sudo[190711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amukpwfqyagktiurdeggojlkofetimab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395890.2421389-73-128698789258314/AnsiballZ_file.py'
Dec 10 19:44:50 compute-0 sudo[190711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:50 compute-0 python3.9[190713]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:44:50 compute-0 sudo[190711]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:51 compute-0 sudo[190863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhksfbeykcopfijkjkosxrqixfwydcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395890.9354925-82-24224447606938/AnsiballZ_command.py'
Dec 10 19:44:51 compute-0 sudo[190863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:51 compute-0 python3.9[190865]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:44:51 compute-0 sudo[190863]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:52 compute-0 python3.9[191017]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:44:52 compute-0 sudo[191167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqbwauqunxrjyycosxjgyhgzonlmvenb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395892.539018-100-62514952193567/AnsiballZ_systemd_service.py'
Dec 10 19:44:52 compute-0 sudo[191167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:53 compute-0 python3.9[191169]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:44:53 compute-0 systemd[1]: Reloading.
Dec 10 19:44:53 compute-0 systemd-rc-local-generator[191198]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:44:53 compute-0 systemd-sysv-generator[191202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:44:53 compute-0 sudo[191167]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:54 compute-0 sudo[191354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yntczqgatpjnbhdtdfsjojdmzhxgmaoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395893.717927-108-280712279584122/AnsiballZ_command.py'
Dec 10 19:44:54 compute-0 sudo[191354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:54 compute-0 python3.9[191356]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:44:54 compute-0 sudo[191354]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:54 compute-0 sudo[191507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfkkbelfeehekfedmjnoqyngmohkcnfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395894.421652-117-59792685403084/AnsiballZ_file.py'
Dec 10 19:44:54 compute-0 sudo[191507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:54 compute-0 python3.9[191509]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:54 compute-0 sudo[191507]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:55 compute-0 python3.9[191659]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:44:56 compute-0 python3.9[191811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:44:57 compute-0 python3.9[191932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395895.9965415-133-92351907876379/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:44:57 compute-0 sudo[192095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietgzhcrdfnuozyyrmhyyzaslqeecltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395897.3333526-148-265940513644911/AnsiballZ_group.py'
Dec 10 19:44:57 compute-0 sudo[192095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:57 compute-0 podman[192056]: 2025-12-10 19:44:57.894908743 +0000 UTC m=+0.077987516 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 10 19:44:58 compute-0 python3.9[192101]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 10 19:44:58 compute-0 sudo[192095]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:58 compute-0 sudo[192253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrypqzjwtfckvijoactsjwkgstkcrmbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395898.477189-159-91477916604014/AnsiballZ_getent.py'
Dec 10 19:44:58 compute-0 sudo[192253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:59 compute-0 python3.9[192255]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 10 19:44:59 compute-0 sudo[192253]: pam_unix(sudo:session): session closed for user root
Dec 10 19:44:59 compute-0 sudo[192406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkunrqwbwwgphggwswzwtqdmwakbgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395899.3555293-167-197601648978228/AnsiballZ_group.py'
Dec 10 19:44:59 compute-0 sudo[192406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:44:59 compute-0 python3.9[192408]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 10 19:44:59 compute-0 groupadd[192409]: group added to /etc/group: name=ceilometer, GID=42405
Dec 10 19:44:59 compute-0 groupadd[192409]: group added to /etc/gshadow: name=ceilometer
Dec 10 19:44:59 compute-0 groupadd[192409]: new group: name=ceilometer, GID=42405
Dec 10 19:44:59 compute-0 sudo[192406]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:00 compute-0 sudo[192564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjbotunahcvpilhsbecwyxkvwajjgenm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395900.094263-175-45798959330122/AnsiballZ_user.py'
Dec 10 19:45:00 compute-0 sudo[192564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:00 compute-0 python3.9[192566]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 10 19:45:00 compute-0 useradd[192568]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Dec 10 19:45:00 compute-0 useradd[192568]: add 'ceilometer' to group 'libvirt'
Dec 10 19:45:00 compute-0 useradd[192568]: add 'ceilometer' to shadow group 'libvirt'
Dec 10 19:45:01 compute-0 sudo[192564]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:01 compute-0 anacron[7483]: Job `cron.weekly' started
Dec 10 19:45:01 compute-0 anacron[7483]: Job `cron.weekly' terminated
Dec 10 19:45:02 compute-0 python3.9[192726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:02 compute-0 python3.9[192847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765395901.6958814-201-180582146060008/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:03 compute-0 python3.9[192997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:03 compute-0 podman[193092]: 2025-12-10 19:45:03.851804662 +0000 UTC m=+0.103885730 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 19:45:03 compute-0 python3.9[193129]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765395902.9233527-201-56405718039422/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:04 compute-0 python3.9[193286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:05 compute-0 python3.9[193407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765395904.1409554-201-159697324977115/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:05 compute-0 python3.9[193557]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:45:06 compute-0 podman[193683]: 2025-12-10 19:45:06.445611332 +0000 UTC m=+0.088395364 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:45:06 compute-0 python3.9[193722]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:45:07 compute-0 python3.9[193887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:07 compute-0 python3.9[194008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395906.812777-260-177057067004569/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:08 compute-0 python3.9[194158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:08 compute-0 python3.9[194234]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:09 compute-0 python3.9[194384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:10 compute-0 python3.9[194505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395909.0789728-260-159828362848705/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:10 compute-0 python3.9[194655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:11 compute-0 python3.9[194776]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395910.2135174-260-146567881868332/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:11 compute-0 python3.9[194926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:12 compute-0 python3.9[195047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395911.4304416-260-249224103879411/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:13 compute-0 python3.9[195197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:13 compute-0 python3.9[195318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395912.589569-260-5732742473376/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:14 compute-0 python3.9[195468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:14 compute-0 python3.9[195589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395913.7925503-260-223336214179139/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:15 compute-0 python3.9[195739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:15 compute-0 python3.9[195860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395914.9206882-260-265626880218124/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:16 compute-0 python3.9[196010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:17 compute-0 python3.9[196131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395916.0939577-260-80591290385210/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:17 compute-0 python3.9[196281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:18 compute-0 python3.9[196402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395917.2990787-260-17158670240789/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:18 compute-0 python3.9[196552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:19 compute-0 python3.9[196673]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765395918.4157517-260-86806359573251/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:20 compute-0 python3.9[196823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:20 compute-0 python3.9[196899]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:21 compute-0 python3.9[197049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:21 compute-0 python3.9[197125]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:22 compute-0 python3.9[197275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:22 compute-0 python3.9[197351]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:23 compute-0 sudo[197501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjvhktlvhhuivutvdlebyrsgfevuxtos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395923.0436878-449-225649556563091/AnsiballZ_file.py'
Dec 10 19:45:23 compute-0 sudo[197501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:45:23.354 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:45:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:45:23.354 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:45:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:45:23.354 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:45:23 compute-0 python3.9[197503]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:23 compute-0 sudo[197501]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:23 compute-0 sudo[197653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylpwzjqeubzzidqlxeedyjqcbfddbuil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395923.6669695-457-30085721787363/AnsiballZ_file.py'
Dec 10 19:45:23 compute-0 sudo[197653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:24 compute-0 python3.9[197655]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:24 compute-0 sudo[197653]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:24 compute-0 nova_compute[189279]: 2025-12-10 19:45:24.293 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:24 compute-0 nova_compute[189279]: 2025-12-10 19:45:24.322 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:24 compute-0 sudo[197805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyyabydejfsjpvebjqcmolgfiuyavdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395924.3101234-465-21834715929693/AnsiballZ_file.py'
Dec 10 19:45:24 compute-0 sudo[197805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:24 compute-0 python3.9[197807]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:45:24 compute-0 sudo[197805]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:25 compute-0 sudo[197957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srkedqxopqqdqduxptwnozpuezmnuqzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395924.926175-473-25111190729784/AnsiballZ_systemd_service.py'
Dec 10 19:45:25 compute-0 sudo[197957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:25 compute-0 python3.9[197959]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:45:25 compute-0 systemd[1]: Reloading.
Dec 10 19:45:25 compute-0 systemd-rc-local-generator[197987]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:45:25 compute-0 systemd-sysv-generator[197991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:45:25 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 10 19:45:25 compute-0 sudo[197957]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:26 compute-0 sudo[198148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgoavewbbisfwdquwryhiadopniachhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395926.3308327-482-7719549705760/AnsiballZ_stat.py'
Dec 10 19:45:26 compute-0 sudo[198148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:26 compute-0 python3.9[198150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:26 compute-0 sudo[198148]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:27 compute-0 sudo[198271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyzblbrgydseberxdnpehgyuuibumupw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395926.3308327-482-7719549705760/AnsiballZ_copy.py'
Dec 10 19:45:27 compute-0 sudo[198271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:27 compute-0 python3.9[198273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395926.3308327-482-7719549705760/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:45:27 compute-0 sudo[198271]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:27 compute-0 sudo[198347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyiunuayiylinuhlesttguyhyjxmefq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395926.3308327-482-7719549705760/AnsiballZ_stat.py'
Dec 10 19:45:27 compute-0 sudo[198347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:27 compute-0 python3.9[198349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:27 compute-0 sudo[198347]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:28 compute-0 podman[198420]: 2025-12-10 19:45:28.077631374 +0000 UTC m=+0.059838963 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 19:45:28 compute-0 sudo[198491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoqrkgfhalmhyogyqvfgvgvadtzdkdwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395926.3308327-482-7719549705760/AnsiballZ_copy.py'
Dec 10 19:45:28 compute-0 sudo[198491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:28 compute-0 python3.9[198493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395926.3308327-482-7719549705760/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:45:28 compute-0 sudo[198491]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:29 compute-0 sudo[198643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rscqaqdiqqmdfwuutgneaorqxxtxxouq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395928.660112-510-69423532704825/AnsiballZ_container_config_data.py'
Dec 10 19:45:29 compute-0 sudo[198643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:29 compute-0 python3.9[198645]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 10 19:45:29 compute-0 sudo[198643]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:29 compute-0 sudo[198795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxgluueajhnmewtqczkmczgwvznnyso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395929.4848285-519-125479561459955/AnsiballZ_container_config_hash.py'
Dec 10 19:45:29 compute-0 sudo[198795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:30 compute-0 python3.9[198797]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:45:30 compute-0 sudo[198795]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:30 compute-0 sudo[198947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayfdoudfkxcvfolftxknjkpeyuffuyfu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395930.3542564-529-30935668742480/AnsiballZ_edpm_container_manage.py'
Dec 10 19:45:30 compute-0 sudo[198947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:31 compute-0 python3[198949]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:45:31 compute-0 podman[198987]: 2025-12-10 19:45:31.345047023 +0000 UTC m=+0.047091162 image pull 56c883f8f40c5930eb627315cd44b817f13b3afba240562a68f6f941d942bd50 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 10 19:45:31 compute-0 podman[198987]: 2025-12-10 19:45:31.474834939 +0000 UTC m=+0.176878988 container create 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 10 19:45:31 compute-0 python3[198949]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec 10 19:45:31 compute-0 sudo[198947]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:32 compute-0 sudo[199174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnpslbmwrlnosrwncgjatwhmcvbtqszj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395931.8347392-537-255443121076407/AnsiballZ_stat.py'
Dec 10 19:45:32 compute-0 sudo[199174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:32 compute-0 python3.9[199176]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:45:32 compute-0 sudo[199174]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:32 compute-0 sudo[199328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zedqvmppbyjhukwrwqbqiidybtfpijqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395932.610324-546-151281973916773/AnsiballZ_file.py'
Dec 10 19:45:32 compute-0 sudo[199328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:33 compute-0 python3.9[199330]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:33 compute-0 sudo[199328]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:33 compute-0 sudo[199479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaazammkdikornimjkzfyjwqemjxwqpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395933.171796-546-146560562133066/AnsiballZ_copy.py'
Dec 10 19:45:33 compute-0 sudo[199479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:33 compute-0 python3.9[199481]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395933.171796-546-146560562133066/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:33 compute-0 sudo[199479]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:34 compute-0 podman[199482]: 2025-12-10 19:45:34.071582598 +0000 UTC m=+0.056282405 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.490 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.490 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.516 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.517 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.517 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.517 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.518 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.518 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.518 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.518 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.518 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.549 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.549 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.550 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.550 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:45:34 compute-0 sudo[199576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvpugsgzrtwtplievzljxwilgxzgdsho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395933.171796-546-146560562133066/AnsiballZ_systemd.py'
Dec 10 19:45:34 compute-0 sudo[199576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.712 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.715 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5994MB free_disk=72.6009292602539GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.715 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.715 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.780 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.781 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.807 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.825 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.827 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:45:34 compute-0 nova_compute[189279]: 2025-12-10 19:45:34.827 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:45:34 compute-0 python3.9[199578]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:45:34 compute-0 systemd[1]: Reloading.
Dec 10 19:45:35 compute-0 systemd-sysv-generator[199607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:45:35 compute-0 systemd-rc-local-generator[199603]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:45:35 compute-0 sudo[199576]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:35 compute-0 sudo[199687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfvsijmptaueqvayiywfdscmzaultrxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395933.171796-546-146560562133066/AnsiballZ_systemd.py'
Dec 10 19:45:35 compute-0 sudo[199687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:36 compute-0 python3.9[199689]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:45:36 compute-0 systemd[1]: Reloading.
Dec 10 19:45:36 compute-0 systemd-rc-local-generator[199719]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:45:36 compute-0 systemd-sysv-generator[199722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:45:36 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 10 19:45:36 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.
Dec 10 19:45:37 compute-0 podman[199729]: 2025-12-10 19:45:37.029381993 +0000 UTC m=+0.677111027 container init 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251210)
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + sudo -E kolla_set_configs
Dec 10 19:45:37 compute-0 sudo[199761]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: sudo: unable to send audit message: Operation not permitted
Dec 10 19:45:37 compute-0 sudo[199761]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:45:37 compute-0 podman[199729]: 2025-12-10 19:45:37.06726022 +0000 UTC m=+0.714989244 container start 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Validating config file
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Copying service configuration files
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: INFO:__main__:Writing out command to execute
Dec 10 19:45:37 compute-0 sudo[199761]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: ++ cat /run_command
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + ARGS=
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + sudo kolla_copy_cacerts
Dec 10 19:45:37 compute-0 sudo[199776]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: sudo: unable to send audit message: Operation not permitted
Dec 10 19:45:37 compute-0 sudo[199776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:45:37 compute-0 sudo[199776]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + [[ ! -n '' ]]
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + . kolla_extend_start
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + umask 0022
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 10 19:45:37 compute-0 podman[199729]: ceilometer_agent_compute
Dec 10 19:45:37 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 10 19:45:37 compute-0 sudo[199687]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:37 compute-0 podman[199762]: 2025-12-10 19:45:37.250542134 +0000 UTC m=+0.172364453 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 19:45:37 compute-0 systemd[1]: 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-73ac937db0788784.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:45:37 compute-0 systemd[1]: 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-73ac937db0788784.service: Failed with result 'exit-code'.
Dec 10 19:45:37 compute-0 podman[199747]: 2025-12-10 19:45:37.282894587 +0000 UTC m=+0.653922767 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 10 19:45:37 compute-0 sudo[199951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrhwkzncneimbraqeszgddibbafirjkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395937.4066646-570-242146552462883/AnsiballZ_systemd.py'
Dec 10 19:45:37 compute-0 sudo[199951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.968 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.968 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.968 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.969 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.970 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.970 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.970 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.970 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.970 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.970 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.970 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.971 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.971 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.971 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.971 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.971 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.971 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.971 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.972 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.973 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.974 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.975 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.976 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.977 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.978 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.979 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.980 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.981 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.982 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.983 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.984 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:37 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:37.985 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 10 19:45:37 compute-0 python3.9[199953]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.006 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.007 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.008 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.009 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.009 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.010 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.011 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.012 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.013 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.014 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.015 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.016 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.017 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.017 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.019 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.020 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.021 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 10 19:45:38 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.210 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.220 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.225 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.226 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.351 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.351 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.351 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.351 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.351 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.351 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.351 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.352 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.353 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.354 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.355 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.356 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.357 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.358 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.359 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.360 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.361 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.362 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.363 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.364 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.367 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.380 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.380 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.380 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc808c4c7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.381 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc80bd3e210>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.382 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.382 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc80b38d220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.382 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4ca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.382 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4d250>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.382 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4ca70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4dc40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4dc70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4dd00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4dd30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4dd90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4de20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4e690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.383 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4deb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.384 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.384 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4df10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.384 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4cf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.384 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4c740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.384 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc809e59760>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.384 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4dfa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.384 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc808c4cfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8087ab920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.385 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc808c4c680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc808c4e660>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc808c4c9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc808c4d220>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc808c4ca40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc808c4dfd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc808c4c770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc808c4dca0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc808c4df40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc808c4f050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.387 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc808c4dd60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc808c4c470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc808c4caa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc808c4c590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc808c4ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc808c4c620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc809f038c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc808c4de80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc808c4c6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc808c4dee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc808c4cd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc808c4c710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc808c4c5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc808c4df70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc808c4cf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc809dbbd70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.645 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.746 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.746 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.746 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.746 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec 10 19:45:38 compute-0 ceilometer_agent_compute[199744]: 2025-12-10 19:45:38.759 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec 10 19:45:38 compute-0 virtqemud[188902]: End of file while reading data: Input/output error
Dec 10 19:45:38 compute-0 virtqemud[188902]: End of file while reading data: Input/output error
Dec 10 19:45:38 compute-0 systemd[1]: libpod-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope: Deactivated successfully.
Dec 10 19:45:38 compute-0 systemd[1]: libpod-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope: Consumed 1.597s CPU time.
Dec 10 19:45:38 compute-0 podman[199965]: 2025-12-10 19:45:38.993175496 +0000 UTC m=+0.936193534 container died 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210)
Dec 10 19:45:39 compute-0 systemd[1]: 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-73ac937db0788784.timer: Deactivated successfully.
Dec 10 19:45:39 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.
Dec 10 19:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-userdata-shm.mount: Deactivated successfully.
Dec 10 19:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a-merged.mount: Deactivated successfully.
Dec 10 19:45:40 compute-0 podman[199965]: 2025-12-10 19:45:39.999030555 +0000 UTC m=+1.942048583 container cleanup 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2)
Dec 10 19:45:40 compute-0 podman[199965]: ceilometer_agent_compute
Dec 10 19:45:40 compute-0 podman[200001]: ceilometer_agent_compute
Dec 10 19:45:40 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 10 19:45:40 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec 10 19:45:40 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 10 19:45:40 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec245c140b8871647a1e1b518c8d760a1346a8e535acc7840829500656ae6b6a/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.
Dec 10 19:45:40 compute-0 podman[200014]: 2025-12-10 19:45:40.984401887 +0000 UTC m=+0.889725501 container init 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2)
Dec 10 19:45:40 compute-0 ceilometer_agent_compute[200029]: + sudo -E kolla_set_configs
Dec 10 19:45:41 compute-0 sudo[200035]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 10 19:45:41 compute-0 podman[200014]: 2025-12-10 19:45:41.012893045 +0000 UTC m=+0.918216669 container start 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: sudo: unable to send audit message: Operation not permitted
Dec 10 19:45:41 compute-0 sudo[200035]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Validating config file
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying service configuration files
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Writing out command to execute
Dec 10 19:45:41 compute-0 sudo[200035]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: ++ cat /run_command
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + ARGS=
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + sudo kolla_copy_cacerts
Dec 10 19:45:41 compute-0 sudo[200050]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: sudo: unable to send audit message: Operation not permitted
Dec 10 19:45:41 compute-0 sudo[200050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:45:41 compute-0 sudo[200050]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + [[ ! -n '' ]]
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + . kolla_extend_start
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + umask 0022
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 10 19:45:41 compute-0 podman[200014]: ceilometer_agent_compute
Dec 10 19:45:41 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 10 19:45:41 compute-0 sudo[199951]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:41 compute-0 podman[200036]: 2025-12-10 19:45:41.264621779 +0000 UTC m=+0.245198144 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, managed_by=edpm_ansible)
Dec 10 19:45:41 compute-0 systemd[1]: 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-5c5ceef659638b3f.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:45:41 compute-0 systemd[1]: 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-5c5ceef659638b3f.service: Failed with result 'exit-code'.
Dec 10 19:45:41 compute-0 sudo[200208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfhqdlzbhpipisnptudvkkwjxbvvvbgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395941.4267511-578-176914080490997/AnsiballZ_stat.py'
Dec 10 19:45:41 compute-0 sudo[200208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.940 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.941 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.941 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.941 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.941 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.941 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.941 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.941 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.942 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.943 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.943 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.943 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.943 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.943 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.943 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.944 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.945 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.946 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.947 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.948 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.949 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.950 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.951 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.952 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.953 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.954 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.955 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.956 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.957 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.957 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.957 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.957 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.957 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 10 19:45:41 compute-0 python3.9[200210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.978 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.979 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.979 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.979 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.980 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.981 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.982 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.983 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.984 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.985 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.986 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.987 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.988 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.989 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.990 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.991 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.991 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.991 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.991 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.991 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.993 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.995 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 10 19:45:41 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:41.995 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 10 19:45:41 compute-0 sudo[200208]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.002 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.008 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.009 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.009 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.136 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.136 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.136 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.136 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.136 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.136 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.136 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.137 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.138 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.139 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.140 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.141 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.142 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.143 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.144 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.145 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.146 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.147 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.148 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
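
The values rendered as **** in the configuration dump above (for example coordination.backend_url, publisher.telemetry_secret and service_credentials.password) are options that oslo.config registered with secret=True; log_opt_values() masks them when cotyledon dumps the effective configuration at service start. A minimal, self-contained sketch of that behaviour, with illustrative option names rather than ceilometer's own definitions:

    # Sketch only: shows why some options above print in clear text and others as ****.
    # oslo.config masks any option registered with secret=True when log_opt_values()
    # dumps the effective configuration (the call cotyledon.oslo_config_glue makes).
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.StrOpt('sample_source', default='openstack'),            # logged in clear text
        cfg.StrOpt('telemetry_secret', secret=True, default='s3cr'),  # logged as ****
    ])
    conf([])                                  # parse an empty command line
    conf.log_opt_values(LOG, logging.DEBUG)   # produces lines like the dump above
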
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.149 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.152 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
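
The dictionary logged by ceilometer.agent above is the parsed form of the polling definition file (polling.cfg_file = polling.yaml in the earlier dump). A sketch of YAML that parses to exactly that structure, reconstructed from the logged dict rather than copied from the node's actual /etc/ceilometer/polling.yaml:

    # Sketch: reconstructs a polling.yaml equivalent to the config dict in the log line above.
    import yaml

    polling_yaml = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    cfg = yaml.safe_load(polling_yaml)
    assert cfg == {'sources': [{'name': 'pollsters', 'interval': 120,
                                'meters': ['power.state', 'cpu', 'memory.usage',
                                           'disk.*', 'network.*']}]}
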
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.167 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.168 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
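
The warning and the "[1] threads" line reflect polling.threads_to_process_pollsters = 1 from the configuration dump: the source defines more pollsters than worker threads, so they execute one after another inside a single-worker pool and the cycle duration is roughly the sum of the individual pollster run times. A small illustrative sketch (names are invented, not ceilometer internals):

    # Sketch: with max_workers=1 the pollsters of a source run back to back.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def fake_pollster(name):
        time.sleep(0.1)            # stand-in for a libvirt or API call
        return name

    pollsters = ['cpu', 'memory.usage', 'disk.device.read.bytes', 'network.incoming.bytes']

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:   # one worker, as configured
        results = list(executor.map(fake_pollster, pollsters))
    print(f"{len(results)} pollsters in {time.monotonic() - start:.2f}s (serialized)")
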
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.168 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.168 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.169 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
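
The agent connects to the local hypervisor over the qemu:///system URI because compute.instance_discovery_method = libvirt_metadata (see the dump above). A sketch of the same connection using the libvirt Python bindings, for orientation only; it is not necessarily the exact call ceilometer.compute.virt.libvirt.utils makes:

    # Sketch: open the hypervisor connection named in the log line above and list guests.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')   # read-only suffices for polling-style queries
    try:
        for dom in conn.listAllDomains():           # running and defined guests on this compute node
            print(dom.name(), dom.isActive())
    finally:
        conn.close()
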
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.169 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.173 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
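
Every "Skip pollster ..., no resources found this cycle" line that follows has the same cause: the local_instances discovery ran against libvirt and returned no guests on this compute host, so each pollster has nothing to sample this cycle. A tiny illustrative sketch of that control flow (function names are invented for the example):

    # Sketch: a pollster is skipped when its discovery method yields no resources.
    def discover_local_instances():
        return []                  # no libvirt domains found on this compute host

    def run_pollster(name, resources):
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [f"sample for {r}" for r in resources]

    run_pollster('disk.ephemeral.size', discover_local_instances())
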
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:45:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:45:42.179 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
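Note: every compute pollster in the cycle above is skipped because the local_instances discovery returned an empty set, i.e. no Nova guests are running on compute-0 at this point in the job. A minimal sketch of confirming the same thing directly against the hypervisor, assuming the python3-libvirt bindings are installed and qemu:///system is reachable (neither is shown in this log):

    import libvirt

    # Sketch only: list libvirt guests on this hypervisor.
    # Assumption: python3-libvirt is available and qemu:///system is accessible.
    conn = libvirt.open("qemu:///system")
    try:
        domains = conn.listAllDomains()
        if not domains:
            # Corresponds to the "no resources found this cycle" skips above.
            print("no guest instances defined on this hypervisor")
        for dom in domains:
            state, _reason = dom.state()
            print(f"{dom.name()}: state={state}")
    finally:
        conn.close()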
Dec 10 19:45:42 compute-0 sudo[200344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfknlricqtalluutgjqthvngwczjpywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395941.4267511-578-176914080490997/AnsiballZ_copy.py'
Dec 10 19:45:42 compute-0 sudo[200344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:42 compute-0 python3.9[200346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395941.4267511-578-176914080490997/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:45:42 compute-0 sudo[200344]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:43 compute-0 sudo[200496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nksqecwoxxpzfdmvlgkiwofvcarfhioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395942.8005548-595-41645509211959/AnsiballZ_container_config_data.py'
Dec 10 19:45:43 compute-0 sudo[200496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:43 compute-0 python3.9[200498]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 10 19:45:43 compute-0 sudo[200496]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:43 compute-0 sudo[200648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oniznqyntfpyunbjxbuuqzxrbxwmiuep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395943.730229-604-21340662463846/AnsiballZ_container_config_hash.py'
Dec 10 19:45:43 compute-0 sudo[200648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:44 compute-0 python3.9[200650]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:45:44 compute-0 sudo[200648]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:44 compute-0 sudo[200800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxkthlzvtxqhisxjefcchafnrgosmzqa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395944.5014546-614-179344782691746/AnsiballZ_edpm_container_manage.py'
Dec 10 19:45:44 compute-0 sudo[200800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:45 compute-0 python3[200802]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:45:45 compute-0 podman[200838]: 2025-12-10 19:45:45.198624943 +0000 UTC m=+0.021607037 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 10 19:45:46 compute-0 podman[200838]: 2025-12-10 19:45:46.533619515 +0000 UTC m=+1.356601589 container create 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm)
Dec 10 19:45:46 compute-0 python3[200802]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
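The PODMAN-CONTAINER-DEBUG line above shows the edpm_container_manage module expanding the config_data dictionary from node_exporter.json into a podman create command line. A hypothetical, heavily simplified sketch of that dict-to-CLI mapping for a few of the keys visible in the log (the helper name and the reduced option set are assumptions; the real module handles many more settings):

    # Hypothetical sketch of the config-dict -> "podman create" translation
    # illustrated above; not the actual edpm_container_manage implementation.
    def podman_create_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        args += cfg.get("command", [])
        return args

    if __name__ == "__main__":
        cfg = {
            "image": "quay.io/prometheus/node-exporter:v1.5.0",
            "net": "host",
            "privileged": True,
            "user": "root",
            "ports": ["9100:9100"],
            "environment": {"OS_ENDPOINT_TYPE": "internal"},
            "volumes": ["/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z"],
            "command": ["--web.disable-exporter-metrics"],
        }
        print(" ".join(podman_create_args("node_exporter", cfg)))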
Dec 10 19:45:46 compute-0 sudo[200800]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:47 compute-0 sudo[201026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdshzudykqhhallxfqnvgknktbyrnrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395946.8481977-622-152062289698811/AnsiballZ_stat.py'
Dec 10 19:45:47 compute-0 sudo[201026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:47 compute-0 python3.9[201028]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:45:47 compute-0 sudo[201026]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:47 compute-0 sudo[201180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnylouxkfivcuawkmllubnvuithtsns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395947.5228193-631-138274281262331/AnsiballZ_file.py'
Dec 10 19:45:47 compute-0 sudo[201180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:48 compute-0 python3.9[201182]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:48 compute-0 sudo[201180]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:48 compute-0 sudo[201331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqnuxjectigxjniyaawnwdlkdpsrdywn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395948.078073-631-234809100900772/AnsiballZ_copy.py'
Dec 10 19:45:48 compute-0 sudo[201331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:48 compute-0 python3.9[201333]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395948.078073-631-234809100900772/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:45:48 compute-0 sudo[201331]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:49 compute-0 sudo[201407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agddjmmiscdgqwsqguzubwvucrdanzxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395948.078073-631-234809100900772/AnsiballZ_systemd.py'
Dec 10 19:45:49 compute-0 sudo[201407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:49 compute-0 python3.9[201409]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:45:49 compute-0 systemd[1]: Reloading.
Dec 10 19:45:49 compute-0 systemd-rc-local-generator[201434]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:45:49 compute-0 systemd-sysv-generator[201438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:45:49 compute-0 sudo[201407]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:49 compute-0 sudo[201518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udhmswvnhlvckxiwekmoansqmjuiljyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395948.078073-631-234809100900772/AnsiballZ_systemd.py'
Dec 10 19:45:49 compute-0 sudo[201518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:50 compute-0 python3.9[201520]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:45:50 compute-0 systemd[1]: Reloading.
Dec 10 19:45:50 compute-0 systemd-rc-local-generator[201546]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:45:50 compute-0 systemd-sysv-generator[201549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:45:50 compute-0 systemd[1]: Starting node_exporter container...
Dec 10 19:45:50 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5415c87daaf2543f98c86337a039b842c7b64057bd636016ec776a6d01b78ae3/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5415c87daaf2543f98c86337a039b842c7b64057bd636016ec776a6d01b78ae3/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.
Dec 10 19:45:50 compute-0 podman[201560]: 2025-12-10 19:45:50.997933109 +0000 UTC m=+0.388040761 container init 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.012Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.012Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.012Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.013Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.013Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.013Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=arp
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=bcache
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=bonding
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=cpu
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=edac
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=filefd
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=netclass
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=netdev
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=netstat
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=nfs
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=nvme
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=softnet
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=systemd
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=xfs
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.014Z caller=node_exporter.go:117 level=info collector=zfs
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.015Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 10 19:45:51 compute-0 node_exporter[201575]: ts=2025-12-10T19:45:51.015Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 10 19:45:51 compute-0 podman[201560]: 2025-12-10 19:45:51.023724732 +0000 UTC m=+0.413832354 container start 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 19:45:51 compute-0 podman[201560]: node_exporter
Dec 10 19:45:51 compute-0 systemd[1]: Started node_exporter container.
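At this point node_exporter is listening on [::]:9100 with TLS enabled (the tls_config.go lines above). A minimal verification sketch that scrapes the metrics endpoint from outside the container; the host name and CA bundle path below are assumptions, and if the web config also requires client certificates (not visible in this log) a plain GET like this would be rejected:

    # Sketch: verify the TLS-enabled node_exporter endpoint is serving metrics.
    # Assumptions: scrape URL and CA path; adjust both to the actual deployment.
    import requests

    CA_BUNDLE = "/var/lib/openstack/certs/telemetry/default/ca.crt"  # assumed path
    URL = "https://compute-0:9100/metrics"                           # assumed host

    resp = requests.get(URL, verify=CA_BUNDLE, timeout=5)
    resp.raise_for_status()
    # Print one sample series to confirm the exporter answers.
    for line in resp.text.splitlines():
        if line.startswith("node_"):
            print(line)
            break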
Dec 10 19:45:51 compute-0 sudo[201518]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:51 compute-0 podman[201584]: 2025-12-10 19:45:51.2209159 +0000 UTC m=+0.185365053 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:45:51 compute-0 sudo[201758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpenknduwgwpreirxigsjonhqhxwtdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395951.3567495-655-81029315574025/AnsiballZ_systemd.py'
Dec 10 19:45:51 compute-0 sudo[201758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:51 compute-0 python3.9[201760]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:45:52 compute-0 systemd[1]: Stopping node_exporter container...
Dec 10 19:45:52 compute-0 systemd[1]: libpod-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope: Deactivated successfully.
Dec 10 19:45:52 compute-0 podman[201764]: 2025-12-10 19:45:52.203617478 +0000 UTC m=+0.126248258 container died 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 19:45:52 compute-0 systemd[1]: 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f-6ef511b1959cf0ec.timer: Deactivated successfully.
Dec 10 19:45:52 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.
Dec 10 19:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f-userdata-shm.mount: Deactivated successfully.
Dec 10 19:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5415c87daaf2543f98c86337a039b842c7b64057bd636016ec776a6d01b78ae3-merged.mount: Deactivated successfully.
Dec 10 19:45:52 compute-0 podman[201764]: 2025-12-10 19:45:52.411643365 +0000 UTC m=+0.334274145 container cleanup 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:45:52 compute-0 podman[201764]: node_exporter
Dec 10 19:45:52 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 10 19:45:52 compute-0 podman[201791]: node_exporter
Dec 10 19:45:52 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 10 19:45:52 compute-0 systemd[1]: Stopped node_exporter container.
Dec 10 19:45:52 compute-0 systemd[1]: Starting node_exporter container...
Dec 10 19:45:52 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5415c87daaf2543f98c86337a039b842c7b64057bd636016ec776a6d01b78ae3/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5415c87daaf2543f98c86337a039b842c7b64057bd636016ec776a6d01b78ae3/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:45:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.
Dec 10 19:45:52 compute-0 podman[201804]: 2025-12-10 19:45:52.747900475 +0000 UTC m=+0.230490798 container init 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.760Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.760Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.760Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=arp
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=bcache
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=bonding
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=cpu
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=edac
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=filefd
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=netclass
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=netdev
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=netstat
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=nfs
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=nvme
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=softnet
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=systemd
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.761Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.762Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.762Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.762Z caller=node_exporter.go:117 level=info collector=xfs
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.762Z caller=node_exporter.go:117 level=info collector=zfs
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.762Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 10 19:45:52 compute-0 node_exporter[201820]: ts=2025-12-10T19:45:52.762Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 10 19:45:52 compute-0 podman[201804]: 2025-12-10 19:45:52.776566467 +0000 UTC m=+0.259156690 container start 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:45:52 compute-0 podman[201804]: node_exporter
Dec 10 19:45:52 compute-0 systemd[1]: Started node_exporter container.
Dec 10 19:45:52 compute-0 sudo[201758]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:52 compute-0 podman[201829]: 2025-12-10 19:45:52.918501938 +0000 UTC m=+0.132571914 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
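The health_status=healthy events above are produced by the systemd-managed "podman healthcheck run <container-id>" timer started earlier. A small sketch that triggers the same check on demand and reads back the recorded state via podman inspect; the container name is taken from the log, and the inspect key name is hedged because it differs across podman versions:

    # Sketch: run the container healthcheck once and read the recorded state,
    # using the same "podman healthcheck run" entry point seen in the journal.
    import json
    import subprocess

    CONTAINER = "node_exporter"

    # Exit code 0 means healthy; non-zero means the healthcheck command failed.
    run = subprocess.run(["podman", "healthcheck", "run", CONTAINER])
    print(f"healthcheck exit code: {run.returncode}")

    inspect = subprocess.run(
        ["podman", "inspect", CONTAINER],
        capture_output=True, text=True, check=True,
    )
    state = json.loads(inspect.stdout)[0]["State"]
    # Older podman releases expose this under "Healthcheck", newer under "Health".
    health = state.get("Health") or state.get("Healthcheck") or {}
    print("recorded health status:", health.get("Status"))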
Dec 10 19:45:53 compute-0 sudo[202003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnjtgympwfitzvyqjzfffmeajijuyft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395953.0499215-663-176688443054456/AnsiballZ_stat.py'
Dec 10 19:45:53 compute-0 sudo[202003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:53 compute-0 python3.9[202005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:45:53 compute-0 sudo[202003]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:53 compute-0 auditd[700]: Audit daemon rotating log files
Dec 10 19:45:54 compute-0 sudo[202126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fybromluxvfpywvnolenznyaunbopooy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395953.0499215-663-176688443054456/AnsiballZ_copy.py'
Dec 10 19:45:54 compute-0 sudo[202126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:54 compute-0 python3.9[202128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395953.0499215-663-176688443054456/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:45:54 compute-0 sudo[202126]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:54 compute-0 sudo[202278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zljnyspxxfmyskqlrghhqcrbyqmkreml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395954.4850388-680-44806571174964/AnsiballZ_container_config_data.py'
Dec 10 19:45:54 compute-0 sudo[202278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:55 compute-0 python3.9[202280]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 10 19:45:55 compute-0 sudo[202278]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:55 compute-0 sudo[202430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayrjoiegmwwtypaqtwglhmrbfyjuqhkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395955.2408679-689-31116620512443/AnsiballZ_container_config_hash.py'
Dec 10 19:45:55 compute-0 sudo[202430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:55 compute-0 python3.9[202432]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:45:55 compute-0 sudo[202430]: pam_unix(sudo:session): session closed for user root
Dec 10 19:45:56 compute-0 sudo[202582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msqjjenupnsfkxvsuhezaioibndkyfwr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395956.0814116-699-73102428004293/AnsiballZ_edpm_container_manage.py'
Dec 10 19:45:56 compute-0 sudo[202582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:45:56 compute-0 python3[202584]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:45:59 compute-0 podman[202641]: 2025-12-10 19:45:59.021304999 +0000 UTC m=+0.281874599 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 19:45:59 compute-0 podman[202598]: 2025-12-10 19:45:59.361662852 +0000 UTC m=+2.564248934 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 10 19:45:59 compute-0 podman[202714]: 2025-12-10 19:45:59.547393012 +0000 UTC m=+0.082153200 container create e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter)
Dec 10 19:45:59 compute-0 podman[202714]: 2025-12-10 19:45:59.490703826 +0000 UTC m=+0.025463994 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 10 19:45:59 compute-0 python3[202584]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec 10 19:45:59 compute-0 sudo[202582]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:00 compute-0 sudo[202903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdcmxihbjvwxlfsmifjglxhctvqhollz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395959.8799975-707-19093414309851/AnsiballZ_stat.py'
Dec 10 19:46:00 compute-0 sudo[202903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:00 compute-0 python3.9[202905]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:46:00 compute-0 sudo[202903]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:01 compute-0 sudo[203057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbojsqiasyrnwnmwlxglaqrfspuvmno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395960.7133179-716-68895483406502/AnsiballZ_file.py'
Dec 10 19:46:01 compute-0 sudo[203057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:01 compute-0 python3.9[203059]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:01 compute-0 sudo[203057]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:01 compute-0 sudo[203208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elsqijpkfalryvzvaopcfzuzgexuzcjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395961.3849149-716-4849034949433/AnsiballZ_copy.py'
Dec 10 19:46:01 compute-0 sudo[203208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:02 compute-0 python3.9[203210]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395961.3849149-716-4849034949433/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:02 compute-0 sudo[203208]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:02 compute-0 sudo[203284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbdkbeihmnoeaxzrajggsgyssomddaut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395961.3849149-716-4849034949433/AnsiballZ_systemd.py'
Dec 10 19:46:02 compute-0 sudo[203284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:02 compute-0 python3.9[203286]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:46:02 compute-0 systemd[1]: Reloading.
Dec 10 19:46:02 compute-0 systemd-sysv-generator[203317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:46:02 compute-0 systemd-rc-local-generator[203314]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:46:03 compute-0 sudo[203284]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:03 compute-0 sudo[203396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-facffslxjnguucezwtbrgjvpdcdldvtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395961.3849149-716-4849034949433/AnsiballZ_systemd.py'
Dec 10 19:46:03 compute-0 sudo[203396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:03 compute-0 python3.9[203398]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:46:03 compute-0 systemd[1]: Reloading.
Dec 10 19:46:03 compute-0 systemd-rc-local-generator[203426]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:46:03 compute-0 systemd-sysv-generator[203429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:46:04 compute-0 systemd[1]: Starting podman_exporter container...
Dec 10 19:46:04 compute-0 podman[203437]: 2025-12-10 19:46:04.31148192 +0000 UTC m=+0.066410261 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Dec 10 19:46:04 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2cb9235fceee74505c4acf00a87e727db91b2f6fb0385ad9ccabad691c1768/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2cb9235fceee74505c4acf00a87e727db91b2f6fb0385ad9ccabad691c1768/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:04 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.
Dec 10 19:46:04 compute-0 podman[203439]: 2025-12-10 19:46:04.381700983 +0000 UTC m=+0.128026821 container init e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:46:04 compute-0 podman_exporter[203473]: ts=2025-12-10T19:46:04.395Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 10 19:46:04 compute-0 podman_exporter[203473]: ts=2025-12-10T19:46:04.395Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 10 19:46:04 compute-0 podman_exporter[203473]: ts=2025-12-10T19:46:04.395Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 10 19:46:04 compute-0 podman_exporter[203473]: ts=2025-12-10T19:46:04.395Z caller=handler.go:105 level=info collector=container
Dec 10 19:46:04 compute-0 podman[203439]: 2025-12-10 19:46:04.40306034 +0000 UTC m=+0.149386158 container start e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:46:04 compute-0 podman[203439]: podman_exporter
Dec 10 19:46:04 compute-0 systemd[1]: Starting Podman API Service...
Dec 10 19:46:04 compute-0 systemd[1]: Started Podman API Service.
Dec 10 19:46:04 compute-0 systemd[1]: Started podman_exporter container.
Dec 10 19:46:04 compute-0 podman[203484]: time="2025-12-10T19:46:04Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 10 19:46:04 compute-0 podman[203484]: time="2025-12-10T19:46:04Z" level=info msg="Setting parallel job count to 25"
Dec 10 19:46:04 compute-0 podman[203484]: time="2025-12-10T19:46:04Z" level=info msg="Using sqlite as database backend"
Dec 10 19:46:04 compute-0 podman[203484]: time="2025-12-10T19:46:04Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 10 19:46:04 compute-0 podman[203484]: time="2025-12-10T19:46:04Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 10 19:46:04 compute-0 podman[203484]: time="2025-12-10T19:46:04Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 10 19:46:04 compute-0 sudo[203396]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:04 compute-0 podman[203484]: @ - - [10/Dec/2025:19:46:04 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 10 19:46:04 compute-0 podman[203484]: time="2025-12-10T19:46:04Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:46:04 compute-0 podman[203482]: 2025-12-10 19:46:04.472627848 +0000 UTC m=+0.058040813 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:46:04 compute-0 systemd[1]: e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56-475c68f145f98930.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:46:04 compute-0 systemd[1]: e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56-475c68f145f98930.service: Failed with result 'exit-code'.
Dec 10 19:46:04 compute-0 podman[203484]: @ - - [10/Dec/2025:19:46:04 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19587 "" "Go-http-client/1.1"
Dec 10 19:46:04 compute-0 podman_exporter[203473]: ts=2025-12-10T19:46:04.493Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 10 19:46:04 compute-0 podman_exporter[203473]: ts=2025-12-10T19:46:04.494Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 10 19:46:04 compute-0 podman_exporter[203473]: ts=2025-12-10T19:46:04.494Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 10 19:46:04 compute-0 sudo[203670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uofqtfuixrdayvbruedotljmbbvuhiev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395964.6195142-740-164969893063839/AnsiballZ_systemd.py'
Dec 10 19:46:04 compute-0 sudo[203670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:05 compute-0 python3.9[203672]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:46:05 compute-0 systemd[1]: Stopping podman_exporter container...
Dec 10 19:46:05 compute-0 podman[203484]: @ - - [10/Dec/2025:19:46:04 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec 10 19:46:05 compute-0 systemd[1]: libpod-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope: Deactivated successfully.
Dec 10 19:46:05 compute-0 podman[203676]: 2025-12-10 19:46:05.460343326 +0000 UTC m=+0.193803066 container died e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:46:05 compute-0 systemd[1]: e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56-475c68f145f98930.timer: Deactivated successfully.
Dec 10 19:46:05 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.
Dec 10 19:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56-userdata-shm.mount: Deactivated successfully.
Dec 10 19:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a2cb9235fceee74505c4acf00a87e727db91b2f6fb0385ad9ccabad691c1768-merged.mount: Deactivated successfully.
Dec 10 19:46:06 compute-0 podman[203676]: 2025-12-10 19:46:06.012809987 +0000 UTC m=+0.746269717 container cleanup e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:46:06 compute-0 podman[203676]: podman_exporter
Dec 10 19:46:06 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 10 19:46:06 compute-0 podman[203704]: podman_exporter
Dec 10 19:46:06 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 10 19:46:06 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 10 19:46:06 compute-0 systemd[1]: Starting podman_exporter container...
Dec 10 19:46:06 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2cb9235fceee74505c4acf00a87e727db91b2f6fb0385ad9ccabad691c1768/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2cb9235fceee74505c4acf00a87e727db91b2f6fb0385ad9ccabad691c1768/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.
Dec 10 19:46:06 compute-0 podman[203717]: 2025-12-10 19:46:06.376764833 +0000 UTC m=+0.274943049 container init e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:46:06 compute-0 podman_exporter[203733]: ts=2025-12-10T19:46:06.389Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 10 19:46:06 compute-0 podman_exporter[203733]: ts=2025-12-10T19:46:06.390Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 10 19:46:06 compute-0 podman_exporter[203733]: ts=2025-12-10T19:46:06.390Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 10 19:46:06 compute-0 podman_exporter[203733]: ts=2025-12-10T19:46:06.390Z caller=handler.go:105 level=info collector=container
Dec 10 19:46:06 compute-0 podman[203484]: @ - - [10/Dec/2025:19:46:06 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 10 19:46:06 compute-0 podman[203484]: time="2025-12-10T19:46:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:46:06 compute-0 podman[203717]: 2025-12-10 19:46:06.398048048 +0000 UTC m=+0.296226264 container start e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:46:06 compute-0 podman[203717]: podman_exporter
Dec 10 19:46:06 compute-0 systemd[1]: Started podman_exporter container.
Dec 10 19:46:06 compute-0 sudo[203670]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:06 compute-0 podman[203484]: @ - - [10/Dec/2025:19:46:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19589 "" "Go-http-client/1.1"
Dec 10 19:46:06 compute-0 podman_exporter[203733]: ts=2025-12-10T19:46:06.595Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 10 19:46:06 compute-0 podman_exporter[203733]: ts=2025-12-10T19:46:06.596Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 10 19:46:06 compute-0 podman_exporter[203733]: ts=2025-12-10T19:46:06.597Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 10 19:46:06 compute-0 podman[203743]: 2025-12-10 19:46:06.601451631 +0000 UTC m=+0.194614726 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:46:06 compute-0 systemd[1]: e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56-4ca451ae11bd34bd.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:46:06 compute-0 systemd[1]: e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56-4ca451ae11bd34bd.service: Failed with result 'exit-code'.
Dec 10 19:46:07 compute-0 sudo[203914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrxanbfuajguopisgwbwypvbonhasib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395966.7354388-748-225127945296819/AnsiballZ_stat.py'
Dec 10 19:46:07 compute-0 sudo[203914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:07 compute-0 python3.9[203916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:07 compute-0 sudo[203914]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:07 compute-0 sudo[204047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvemhffcjzvxkfrnjcieiwqcuttrbfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395966.7354388-748-225127945296819/AnsiballZ_copy.py'
Dec 10 19:46:07 compute-0 sudo[204047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:07 compute-0 podman[204011]: 2025-12-10 19:46:07.686694126 +0000 UTC m=+0.081961845 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 19:46:07 compute-0 python3.9[204055]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765395966.7354388-748-225127945296819/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:46:07 compute-0 sudo[204047]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:08 compute-0 sudo[204212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzsdxspwbjscmaawajtvonwutqenaevw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395968.1393213-765-265689956860631/AnsiballZ_container_config_data.py'
Dec 10 19:46:08 compute-0 sudo[204212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:08 compute-0 python3.9[204214]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 10 19:46:08 compute-0 sudo[204212]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:09 compute-0 sudo[204364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noxfoeitghpgbxvdpzfsnvjnxzfcufjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395968.821087-774-72373199170567/AnsiballZ_container_config_hash.py'
Dec 10 19:46:09 compute-0 sudo[204364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:09 compute-0 python3.9[204366]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:46:09 compute-0 sudo[204364]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:09 compute-0 sudo[204516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykvabfduafkolihvutrdgmcugfcssxkb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765395969.6089242-784-20137479670049/AnsiballZ_edpm_container_manage.py'
Dec 10 19:46:09 compute-0 sudo[204516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:10 compute-0 python3[204518]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:46:13 compute-0 podman[204557]: 2025-12-10 19:46:13.300970325 +0000 UTC m=+1.237290810 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251210)
Dec 10 19:46:13 compute-0 systemd[1]: 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-5c5ceef659638b3f.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:46:13 compute-0 systemd[1]: 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1-5c5ceef659638b3f.service: Failed with result 'exit-code'.
Dec 10 19:46:13 compute-0 podman[204530]: 2025-12-10 19:46:13.613547173 +0000 UTC m=+3.427402275 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 10 19:46:13 compute-0 podman[204645]: 2025-12-10 19:46:13.783644763 +0000 UTC m=+0.056775993 container create d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 19:46:13 compute-0 podman[204645]: 2025-12-10 19:46:13.76004947 +0000 UTC m=+0.033180720 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 10 19:46:13 compute-0 python3[204518]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 10 19:46:13 compute-0 sudo[204516]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:14 compute-0 sudo[204833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnbmhtnzxsdmvfqvritdcndicgxaiwjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395974.101913-792-126935449444911/AnsiballZ_stat.py'
Dec 10 19:46:14 compute-0 sudo[204833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:14 compute-0 python3.9[204835]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:46:14 compute-0 sudo[204833]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:15 compute-0 sudo[204987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqifoziurztvbyhpyrlpmyjwvmzfxogm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395974.7858882-801-93793545816611/AnsiballZ_file.py'
Dec 10 19:46:15 compute-0 sudo[204987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:15 compute-0 python3.9[204989]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:15 compute-0 sudo[204987]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:15 compute-0 sudo[205138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvfdoxecugydrpsqpbhajubsyxdutnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395975.3161676-801-20910634222791/AnsiballZ_copy.py'
Dec 10 19:46:15 compute-0 sudo[205138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:15 compute-0 python3.9[205140]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765395975.3161676-801-20910634222791/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:15 compute-0 sudo[205138]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:16 compute-0 sudo[205214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfeymrhgbzuemznhvfstgsqjltwibgnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395975.3161676-801-20910634222791/AnsiballZ_systemd.py'
Dec 10 19:46:16 compute-0 sudo[205214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:16 compute-0 python3.9[205216]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:46:16 compute-0 systemd[1]: Reloading.
Dec 10 19:46:16 compute-0 systemd-sysv-generator[205249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:46:16 compute-0 systemd-rc-local-generator[205245]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:46:16 compute-0 sudo[205214]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:16 compute-0 rsyslogd[1003]: imjournal from <np0005554310:sudo>: begin to drop messages due to rate-limiting
Dec 10 19:46:17 compute-0 sudo[205326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaefxssbsobpeeucjgvdyfkzyardbwfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395975.3161676-801-20910634222791/AnsiballZ_systemd.py'
Dec 10 19:46:17 compute-0 sudo[205326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:17 compute-0 python3.9[205328]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:46:17 compute-0 systemd[1]: Reloading.
Dec 10 19:46:17 compute-0 systemd-rc-local-generator[205352]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:46:17 compute-0 systemd-sysv-generator[205357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:46:17 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 10 19:46:18 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936d33b8edc5952e67f5a1de8ca8bc097a4ef2f987ce8209f112a79ec9577f9/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936d33b8edc5952e67f5a1de8ca8bc097a4ef2f987ce8209f112a79ec9577f9/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936d33b8edc5952e67f5a1de8ca8bc097a4ef2f987ce8209f112a79ec9577f9/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.
Dec 10 19:46:18 compute-0 podman[205368]: 2025-12-10 19:46:18.115364925 +0000 UTC m=+0.135515107 container init d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *bridge.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *coverage.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *datapath.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *iface.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *memory.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *ovnnorthd.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *ovn.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *ovsdbserver.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *pmd_perf.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *pmd_rxq.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: INFO    19:46:18 main.go:48: registering *vswitch.Collector
Dec 10 19:46:18 compute-0 openstack_network_exporter[205384]: NOTICE  19:46:18 main.go:76: listening on https://:9105/metrics
Dec 10 19:46:18 compute-0 podman[205368]: 2025-12-10 19:46:18.145951301 +0000 UTC m=+0.166101483 container start d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Dec 10 19:46:18 compute-0 podman[205368]: openstack_network_exporter
Dec 10 19:46:18 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 10 19:46:18 compute-0 sudo[205326]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:18 compute-0 podman[205393]: 2025-12-10 19:46:18.247056877 +0000 UTC m=+0.093248114 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 10 19:46:18 compute-0 sudo[205566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brlrxyuwkfvfumasequrapcnavvvsxro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395978.4012947-825-31066265704484/AnsiballZ_systemd.py'
Dec 10 19:46:18 compute-0 sudo[205566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:19 compute-0 python3.9[205568]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:46:19 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Dec 10 19:46:19 compute-0 systemd[1]: libpod-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope: Deactivated successfully.
Dec 10 19:46:19 compute-0 podman[205572]: 2025-12-10 19:46:19.171126553 +0000 UTC m=+0.075321201 container died d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 19:46:19 compute-0 systemd[1]: d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7-5f0b3f6e686c4460.timer: Deactivated successfully.
Dec 10 19:46:19 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.
Dec 10 19:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7-userdata-shm.mount: Deactivated successfully.
Dec 10 19:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2936d33b8edc5952e67f5a1de8ca8bc097a4ef2f987ce8209f112a79ec9577f9-merged.mount: Deactivated successfully.
Dec 10 19:46:20 compute-0 podman[205572]: 2025-12-10 19:46:20.311010307 +0000 UTC m=+1.215204945 container cleanup d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 19:46:20 compute-0 podman[205572]: openstack_network_exporter
Dec 10 19:46:20 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 10 19:46:20 compute-0 podman[205601]: openstack_network_exporter
Dec 10 19:46:20 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 10 19:46:20 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 10 19:46:20 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 10 19:46:20 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936d33b8edc5952e67f5a1de8ca8bc097a4ef2f987ce8209f112a79ec9577f9/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936d33b8edc5952e67f5a1de8ca8bc097a4ef2f987ce8209f112a79ec9577f9/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2936d33b8edc5952e67f5a1de8ca8bc097a4ef2f987ce8209f112a79ec9577f9/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:46:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.
Dec 10 19:46:20 compute-0 podman[205615]: 2025-12-10 19:46:20.520780567 +0000 UTC m=+0.122630350 container init d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *bridge.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *coverage.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *datapath.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *iface.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *memory.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *ovnnorthd.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *ovn.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *ovsdbserver.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *pmd_perf.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *pmd_rxq.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: INFO    19:46:20 main.go:48: registering *vswitch.Collector
Dec 10 19:46:20 compute-0 openstack_network_exporter[205632]: NOTICE  19:46:20 main.go:76: listening on https://:9105/metrics
Dec 10 19:46:20 compute-0 podman[205615]: 2025-12-10 19:46:20.54966118 +0000 UTC m=+0.151510963 container start d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 19:46:20 compute-0 podman[205615]: openstack_network_exporter
Dec 10 19:46:20 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 10 19:46:20 compute-0 sudo[205566]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:20 compute-0 podman[205642]: 2025-12-10 19:46:20.643525448 +0000 UTC m=+0.085357389 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec 10 19:46:21 compute-0 sudo[205813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbopdqdcvslqtumyjznqlyvhwczynsmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395980.7749133-833-166210348434566/AnsiballZ_find.py'
Dec 10 19:46:21 compute-0 sudo[205813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:21 compute-0 python3.9[205815]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:46:21 compute-0 sudo[205813]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:22 compute-0 sudo[205965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwzbwfpfzrhnumafozrlhryanbtfvixj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395981.7384043-843-137921546697612/AnsiballZ_podman_container_info.py'
Dec 10 19:46:22 compute-0 sudo[205965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:22 compute-0 python3.9[205967]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 10 19:46:22 compute-0 sudo[205965]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:23 compute-0 sudo[206143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdttakxiombgcnfcmimlumgtrviaacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395982.629615-851-276798717751747/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:23 compute-0 sudo[206143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:23 compute-0 podman[206104]: 2025-12-10 19:46:23.06966991 +0000 UTC m=+0.058480945 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:46:23 compute-0 python3.9[206156]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:23 compute-0 systemd[1]: Started libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope.
Dec 10 19:46:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:46:23.355 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:46:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:46:23.355 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:46:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:46:23.355 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:46:23 compute-0 podman[206157]: 2025-12-10 19:46:23.364301975 +0000 UTC m=+0.089757507 container exec 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 10 19:46:23 compute-0 podman[206157]: 2025-12-10 19:46:23.398952871 +0000 UTC m=+0.124408323 container exec_died 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 10 19:46:23 compute-0 systemd[1]: libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope: Deactivated successfully.
Dec 10 19:46:23 compute-0 sudo[206143]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:23 compute-0 sudo[206339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hppuusgirhhzgblhwdrmjbvfbwdzfeur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395983.6228483-859-158777231969638/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:23 compute-0 sudo[206339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:24 compute-0 python3.9[206341]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:24 compute-0 systemd[1]: Started libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope.
Dec 10 19:46:24 compute-0 podman[206342]: 2025-12-10 19:46:24.179249746 +0000 UTC m=+0.072893181 container exec 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:46:24 compute-0 podman[206342]: 2025-12-10 19:46:24.214130117 +0000 UTC m=+0.107773532 container exec_died 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 19:46:24 compute-0 systemd[1]: libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope: Deactivated successfully.
Dec 10 19:46:24 compute-0 sudo[206339]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:24 compute-0 sudo[206524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjdpjijnixyqifdqdsnqpxdipvgtfhol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395984.3972087-867-276409981920757/AnsiballZ_file.py'
Dec 10 19:46:24 compute-0 sudo[206524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:24 compute-0 python3.9[206526]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:24 compute-0 sudo[206524]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:25 compute-0 sudo[206676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrleinkxoltilqbypvdffhltjakbggrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395984.9893472-876-16548078620068/AnsiballZ_podman_container_info.py'
Dec 10 19:46:25 compute-0 sudo[206676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:25 compute-0 python3.9[206678]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 10 19:46:25 compute-0 sudo[206676]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:25 compute-0 sudo[206841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gizzmbwzpaelzzpatcxnvbegcvdvsqda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395985.6857743-884-144315429641/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:25 compute-0 sudo[206841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:26 compute-0 python3.9[206843]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:26 compute-0 systemd[1]: Started libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope.
Dec 10 19:46:26 compute-0 podman[206844]: 2025-12-10 19:46:26.264444991 +0000 UTC m=+0.080867178 container exec 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 10 19:46:26 compute-0 podman[206844]: 2025-12-10 19:46:26.299074686 +0000 UTC m=+0.115496903 container exec_died 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 10 19:46:26 compute-0 systemd[1]: libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope: Deactivated successfully.
Dec 10 19:46:26 compute-0 sudo[206841]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:26 compute-0 sudo[207025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kixbksuijbyjhdtjkfiphwpepsmjacip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395986.499442-892-83514981257553/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:26 compute-0 sudo[207025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:26 compute-0 python3.9[207027]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:27 compute-0 systemd[1]: Started libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope.
Dec 10 19:46:27 compute-0 podman[207028]: 2025-12-10 19:46:27.073782644 +0000 UTC m=+0.080012207 container exec 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 19:46:27 compute-0 podman[207028]: 2025-12-10 19:46:27.107044955 +0000 UTC m=+0.113274518 container exec_died 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 19:46:27 compute-0 systemd[1]: libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope: Deactivated successfully.
Dec 10 19:46:27 compute-0 sudo[207025]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:27 compute-0 sudo[207208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tafrioztevobjkrsrndssyndrmvbmitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395987.2942386-900-175230885690942/AnsiballZ_file.py'
Dec 10 19:46:27 compute-0 sudo[207208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:27 compute-0 python3.9[207210]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:27 compute-0 sudo[207208]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:28 compute-0 sudo[207360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjnvcvdhuyqwrnldbuvucrxdbsxcjipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395987.9295576-909-245130181467842/AnsiballZ_podman_container_info.py'
Dec 10 19:46:28 compute-0 sudo[207360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:28 compute-0 python3.9[207362]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 10 19:46:28 compute-0 sudo[207360]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:28 compute-0 sudo[207526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqakgkehymnmpxfquthhgwwfpyhjufir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395988.709533-917-204174204382687/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:28 compute-0 sudo[207526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:29 compute-0 python3.9[207528]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:29 compute-0 systemd[1]: Started libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope.
Dec 10 19:46:29 compute-0 podman[207529]: 2025-12-10 19:46:29.292466155 +0000 UTC m=+0.077843714 container exec b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 19:46:29 compute-0 podman[207529]: 2025-12-10 19:46:29.328978176 +0000 UTC m=+0.114355695 container exec_died b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 10 19:46:29 compute-0 sudo[207526]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:29 compute-0 systemd[1]: libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope: Deactivated successfully.
Dec 10 19:46:29 compute-0 podman[207547]: 2025-12-10 19:46:29.390648509 +0000 UTC m=+0.091926602 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 19:46:29 compute-0 sudo[207729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqnrknhsdfdknzgqwvdoujfiwhcxishp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395989.516672-925-154848195486278/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:29 compute-0 sudo[207729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:29 compute-0 python3.9[207731]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:30 compute-0 systemd[1]: Started libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope.
Dec 10 19:46:30 compute-0 podman[207732]: 2025-12-10 19:46:30.032506806 +0000 UTC m=+0.075062444 container exec b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 10 19:46:30 compute-0 podman[207732]: 2025-12-10 19:46:30.041714944 +0000 UTC m=+0.084270532 container exec_died b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 10 19:46:30 compute-0 systemd[1]: libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope: Deactivated successfully.
Dec 10 19:46:30 compute-0 sudo[207729]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:30 compute-0 sudo[207913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbwzzmjftfpalezkzoqjjtfjpcmtonho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395990.2521958-933-237445410284638/AnsiballZ_file.py'
Dec 10 19:46:30 compute-0 sudo[207913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:30 compute-0 python3.9[207915]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:30 compute-0 sudo[207913]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:31 compute-0 sudo[208065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bobqcnrzkmblvqqghrfxlmawyivkyebb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395990.9698956-942-253502611469823/AnsiballZ_podman_container_info.py'
Dec 10 19:46:31 compute-0 sudo[208065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:31 compute-0 python3.9[208067]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 10 19:46:31 compute-0 sudo[208065]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:31 compute-0 sudo[208231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emkezawwbnnnaxdqqciuvltecsxeexbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395991.7160468-950-209997852135490/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:31 compute-0 sudo[208231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:32 compute-0 python3.9[208233]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:32 compute-0 systemd[1]: Started libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope.
Dec 10 19:46:32 compute-0 podman[208234]: 2025-12-10 19:46:32.268048373 +0000 UTC m=+0.092233868 container exec 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 19:46:32 compute-0 podman[208234]: 2025-12-10 19:46:32.300004462 +0000 UTC m=+0.124189987 container exec_died 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 10 19:46:32 compute-0 systemd[1]: libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope: Deactivated successfully.
Dec 10 19:46:32 compute-0 sudo[208231]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:32 compute-0 sudo[208413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfpckgueortqkgtwnhxpuhuclusubanc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395992.4879076-958-6404100752183/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:32 compute-0 sudo[208413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:33 compute-0 python3.9[208415]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:33 compute-0 systemd[1]: Started libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope.
Dec 10 19:46:33 compute-0 podman[208416]: 2025-12-10 19:46:33.09860642 +0000 UTC m=+0.083197685 container exec 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251210)
Dec 10 19:46:33 compute-0 podman[208416]: 2025-12-10 19:46:33.131951823 +0000 UTC m=+0.116543068 container exec_died 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 10 19:46:33 compute-0 systemd[1]: libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope: Deactivated successfully.
Dec 10 19:46:33 compute-0 sudo[208413]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:33 compute-0 sudo[208596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbblfndhgtqhczcopyxotkjjxnguihx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395993.325358-966-107305554144667/AnsiballZ_file.py'
Dec 10 19:46:33 compute-0 sudo[208596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:33 compute-0 python3.9[208598]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:33 compute-0 sudo[208596]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:34 compute-0 sudo[208748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phowsguylisxeyxegfcwsnjqeptalmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395994.0428872-975-22313203433195/AnsiballZ_podman_container_info.py'
Dec 10 19:46:34 compute-0 sudo[208748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:34 compute-0 python3.9[208750]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 10 19:46:34 compute-0 sudo[208748]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:34 compute-0 nova_compute[189279]: 2025-12-10 19:46:34.820 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:34 compute-0 nova_compute[189279]: 2025-12-10 19:46:34.843 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:35 compute-0 sudo[208928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbaeofrakzqctofhneylztliqnoavgvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395994.7762551-983-104184737483021/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:35 compute-0 sudo[208928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:35 compute-0 podman[208888]: 2025-12-10 19:46:35.095448604 +0000 UTC m=+0.069535067 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:46:35 compute-0 python3.9[208936]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:35 compute-0 systemd[1]: Started libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope.
Dec 10 19:46:35 compute-0 podman[208938]: 2025-12-10 19:46:35.367672036 +0000 UTC m=+0.077303640 container exec 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 19:46:35 compute-0 podman[208938]: 2025-12-10 19:46:35.401356477 +0000 UTC m=+0.110988081 container exec_died 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:46:35 compute-0 systemd[1]: libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope: Deactivated successfully.
Dec 10 19:46:35 compute-0 sudo[208928]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:35 compute-0 nova_compute[189279]: 2025-12-10 19:46:35.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:35 compute-0 nova_compute[189279]: 2025-12-10 19:46:35.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:35 compute-0 nova_compute[189279]: 2025-12-10 19:46:35.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:35 compute-0 nova_compute[189279]: 2025-12-10 19:46:35.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:35 compute-0 nova_compute[189279]: 2025-12-10 19:46:35.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:35 compute-0 nova_compute[189279]: 2025-12-10 19:46:35.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:46:35 compute-0 sudo[209121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddiixugcawaqtvbnsgdebphygigowrnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395995.5850215-991-129761175083872/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:35 compute-0 sudo[209121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:36 compute-0 python3.9[209123]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:36 compute-0 systemd[1]: Started libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope.
Dec 10 19:46:36 compute-0 podman[209124]: 2025-12-10 19:46:36.152533424 +0000 UTC m=+0.071675061 container exec 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:46:36 compute-0 podman[209124]: 2025-12-10 19:46:36.186835841 +0000 UTC m=+0.105977458 container exec_died 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:46:36 compute-0 systemd[1]: libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope: Deactivated successfully.
Dec 10 19:46:36 compute-0 sudo[209121]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.503 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.503 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.503 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.528 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.528 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.529 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.676 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.677 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5842MB free_disk=72.4331283569336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.677 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.678 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:46:36 compute-0 sudo[209315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkryxjngfkjdzgkwvbimbwnzkqpituhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395996.425098-999-239366145680490/AnsiballZ_file.py'
Dec 10 19:46:36 compute-0 sudo[209315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:36 compute-0 podman[209280]: 2025-12-10 19:46:36.724497656 +0000 UTC m=+0.052304113 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.746 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.746 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.766 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.785 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.786 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:46:36 compute-0 nova_compute[189279]: 2025-12-10 19:46:36.786 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:46:36 compute-0 python3.9[209332]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:36 compute-0 sudo[209315]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:37 compute-0 sudo[209482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbjtvzlibfgkrezxwowjodnibpwbdkvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395997.1394258-1008-74318065606062/AnsiballZ_podman_container_info.py'
Dec 10 19:46:37 compute-0 sudo[209482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:37 compute-0 python3.9[209484]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 10 19:46:37 compute-0 sudo[209482]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:38 compute-0 sudo[209659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzsdvncdyxtslfbfqddzdynksjpgrhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395997.7778842-1016-22027336902434/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:38 compute-0 sudo[209659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:38 compute-0 podman[209621]: 2025-12-10 19:46:38.061105407 +0000 UTC m=+0.073386012 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 10 19:46:38 compute-0 python3.9[209667]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:38 compute-0 systemd[1]: Started libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope.
Dec 10 19:46:38 compute-0 podman[209675]: 2025-12-10 19:46:38.32566036 +0000 UTC m=+0.090934347 container exec e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:46:38 compute-0 podman[209675]: 2025-12-10 19:46:38.35682391 +0000 UTC m=+0.122097897 container exec_died e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:46:38 compute-0 systemd[1]: libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope: Deactivated successfully.
Dec 10 19:46:38 compute-0 sudo[209659]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:38 compute-0 sudo[209856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnftuaogkyodyiiykmycbnkimxtkpysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395998.549669-1024-63934714069186/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:38 compute-0 sudo[209856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:39 compute-0 python3.9[209858]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:39 compute-0 systemd[1]: Started libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope.
Dec 10 19:46:39 compute-0 podman[209859]: 2025-12-10 19:46:39.097138238 +0000 UTC m=+0.072107441 container exec e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:46:39 compute-0 podman[209879]: 2025-12-10 19:46:39.160770799 +0000 UTC m=+0.051491692 container exec_died e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:46:39 compute-0 podman[209859]: 2025-12-10 19:46:39.166163563 +0000 UTC m=+0.141132766 container exec_died e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:46:39 compute-0 systemd[1]: libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope: Deactivated successfully.
Dec 10 19:46:39 compute-0 sudo[209856]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:39 compute-0 sudo[210041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxpnhbteowoffqnecsweyjppkudlwknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395999.3428218-1032-42965324851448/AnsiballZ_file.py'
Dec 10 19:46:39 compute-0 sudo[210041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:39 compute-0 python3.9[210043]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:39 compute-0 sudo[210041]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:40 compute-0 sudo[210193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prywfhcizgybcjeuqhrecohonuqihdin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765395999.9876027-1041-75656573464033/AnsiballZ_podman_container_info.py'
Dec 10 19:46:40 compute-0 sudo[210193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:40 compute-0 python3.9[210195]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 10 19:46:40 compute-0 sudo[210193]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:40 compute-0 sudo[210358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihctpnzzbpjsifyhbudbowiskwlfcnmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396000.716454-1049-224864177787130/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:40 compute-0 sudo[210358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:41 compute-0 python3.9[210360]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:41 compute-0 systemd[1]: Started libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope.
Dec 10 19:46:41 compute-0 podman[210361]: 2025-12-10 19:46:41.246740754 +0000 UTC m=+0.071800165 container exec d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter)
Dec 10 19:46:41 compute-0 podman[210361]: 2025-12-10 19:46:41.275864722 +0000 UTC m=+0.100924133 container exec_died d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 10 19:46:41 compute-0 systemd[1]: libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope: Deactivated successfully.
Dec 10 19:46:41 compute-0 sudo[210358]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:41 compute-0 sudo[210542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szazdmenjyeieujisrqhqrfxuaohdiyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396001.453485-1057-231828532335453/AnsiballZ_podman_container_exec.py'
Dec 10 19:46:41 compute-0 sudo[210542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:41 compute-0 python3.9[210544]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:46:41 compute-0 systemd[1]: Started libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope.
Dec 10 19:46:41 compute-0 podman[210545]: 2025-12-10 19:46:41.970973245 +0000 UTC m=+0.070973544 container exec d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Dec 10 19:46:42 compute-0 podman[210545]: 2025-12-10 19:46:42.004884132 +0000 UTC m=+0.104884401 container exec_died d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350)
Dec 10 19:46:42 compute-0 systemd[1]: libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope: Deactivated successfully.
Dec 10 19:46:42 compute-0 sudo[210542]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:42 compute-0 sudo[210726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqlkvswvwiljcrayochkqugixdxfkopp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396002.1925366-1065-85178575488151/AnsiballZ_file.py'
Dec 10 19:46:42 compute-0 sudo[210726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:42 compute-0 python3.9[210728]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:42 compute-0 sudo[210726]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:43 compute-0 sudo[210878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veyfvrahqxzwdmnvollafhpllifsyqze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396002.8499634-1074-214865444745074/AnsiballZ_file.py'
Dec 10 19:46:43 compute-0 sudo[210878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:43 compute-0 python3.9[210880]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:43 compute-0 sudo[210878]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:43 compute-0 sudo[211041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doqvhwppouyxmdkpipmxzjczdhvcfhtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396003.4173038-1082-26541794829449/AnsiballZ_stat.py'
Dec 10 19:46:43 compute-0 sudo[211041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:43 compute-0 podman[211004]: 2025-12-10 19:46:43.923415142 +0000 UTC m=+0.063981860 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 10 19:46:44 compute-0 python3.9[211047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:44 compute-0 sudo[211041]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:44 compute-0 sudo[211171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpoblorklgwzmlwfjtpufvxxnriyrsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396003.4173038-1082-26541794829449/AnsiballZ_copy.py'
Dec 10 19:46:44 compute-0 sudo[211171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:44 compute-0 python3.9[211173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765396003.4173038-1082-26541794829449/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:44 compute-0 sudo[211171]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:45 compute-0 sudo[211323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbpdhilsllwsssykcxffgmeglxfgapkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396004.7987769-1098-18577011083962/AnsiballZ_file.py'
Dec 10 19:46:45 compute-0 sudo[211323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:45 compute-0 python3.9[211325]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:45 compute-0 sudo[211323]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:45 compute-0 sudo[211475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqqotkbeacmnnwonwqaotpwsuxampyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396005.538389-1106-29989043576873/AnsiballZ_stat.py'
Dec 10 19:46:45 compute-0 sudo[211475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:46 compute-0 python3.9[211477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:46 compute-0 sudo[211475]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:46 compute-0 sudo[211553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuuqvjjxmnqquoyhiljlnpdwoarxzjox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396005.538389-1106-29989043576873/AnsiballZ_file.py'
Dec 10 19:46:46 compute-0 sudo[211553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:46 compute-0 python3.9[211555]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:46 compute-0 sudo[211553]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:47 compute-0 sudo[211705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqnhcbrkyadgcxwxsdyiupkpyvhruqvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396006.778844-1118-153026924163720/AnsiballZ_stat.py'
Dec 10 19:46:47 compute-0 sudo[211705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:47 compute-0 python3.9[211707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:47 compute-0 sudo[211705]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:47 compute-0 sudo[211783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxeftgdshwiwoiwasuwzyzszhdogiggo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396006.778844-1118-153026924163720/AnsiballZ_file.py'
Dec 10 19:46:47 compute-0 sudo[211783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:47 compute-0 python3.9[211785]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.85vxqu14 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:47 compute-0 sudo[211783]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:48 compute-0 sudo[211935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkiuiuzjqogzgvbnynkdxbnyedwtaicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396007.8370168-1130-251681993806210/AnsiballZ_stat.py'
Dec 10 19:46:48 compute-0 sudo[211935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:48 compute-0 python3.9[211937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:48 compute-0 sudo[211935]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:48 compute-0 sudo[212013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejzecekzwttombxinkwwpzmczsmntxim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396007.8370168-1130-251681993806210/AnsiballZ_file.py'
Dec 10 19:46:48 compute-0 sudo[212013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:48 compute-0 python3.9[212015]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:48 compute-0 sudo[212013]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:49 compute-0 sudo[212165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjjltxbyvraonoccgplzduedohfcuulg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396008.8990402-1143-175144110823511/AnsiballZ_command.py'
Dec 10 19:46:49 compute-0 sudo[212165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:49 compute-0 python3.9[212167]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:46:49 compute-0 sudo[212165]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:49 compute-0 sudo[212318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjixjyfunevujwpdwiumotzskoswotev ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396009.5207584-1151-92566411511018/AnsiballZ_edpm_nftables_from_files.py'
Dec 10 19:46:49 compute-0 sudo[212318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:50 compute-0 python3[212320]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 10 19:46:50 compute-0 sudo[212318]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:50 compute-0 sudo[212484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pslnaurxxuscxenvvztuedmswqdfljus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396010.378296-1159-268700514404174/AnsiballZ_stat.py'
Dec 10 19:46:50 compute-0 sudo[212484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:50 compute-0 podman[212444]: 2025-12-10 19:46:50.787143912 +0000 UTC m=+0.084120479 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 19:46:50 compute-0 python3.9[212492]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:51 compute-0 sudo[212484]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:51 compute-0 rsyslogd[1003]: imjournal: 330 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 10 19:46:51 compute-0 sudo[212570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiqhqysjzbxhujgkknztlukirbvajgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396010.378296-1159-268700514404174/AnsiballZ_file.py'
Dec 10 19:46:51 compute-0 sudo[212570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:51 compute-0 python3.9[212572]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:51 compute-0 sudo[212570]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:52 compute-0 sudo[212722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tloitlprfaxbmnyyghhucilrgvnhrcxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396011.7358654-1171-263508640021705/AnsiballZ_stat.py'
Dec 10 19:46:52 compute-0 sudo[212722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:52 compute-0 python3.9[212724]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:52 compute-0 sudo[212722]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:52 compute-0 sudo[212800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrcxxcvscvpoybybtxjnfdonsazienfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396011.7358654-1171-263508640021705/AnsiballZ_file.py'
Dec 10 19:46:52 compute-0 sudo[212800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:52 compute-0 python3.9[212802]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:52 compute-0 sudo[212800]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:53 compute-0 sudo[212966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvpqrijacsisjinxrbvcetrebrlpmgub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396012.8818169-1183-122338083604643/AnsiballZ_stat.py'
Dec 10 19:46:53 compute-0 sudo[212966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:53 compute-0 podman[212926]: 2025-12-10 19:46:53.219892538 +0000 UTC m=+0.076890320 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:46:53 compute-0 python3.9[212972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:53 compute-0 sudo[212966]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:53 compute-0 sudo[213055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlcpcxlgllysrvlbpzqcqhtmyhmwkcvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396012.8818169-1183-122338083604643/AnsiballZ_file.py'
Dec 10 19:46:53 compute-0 sudo[213055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:53 compute-0 python3.9[213057]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:53 compute-0 sudo[213055]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:54 compute-0 sudo[213207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgdxiutwukbcolcrhndfoofjdftvfxfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396013.9889176-1195-222928097483573/AnsiballZ_stat.py'
Dec 10 19:46:54 compute-0 sudo[213207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:54 compute-0 python3.9[213209]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:54 compute-0 sudo[213207]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:54 compute-0 sudo[213285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqendufrmnuwfvfrvoryikhebdosrwtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396013.9889176-1195-222928097483573/AnsiballZ_file.py'
Dec 10 19:46:54 compute-0 sudo[213285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:54 compute-0 python3.9[213287]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:54 compute-0 sudo[213285]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:55 compute-0 sudo[213437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfczghyzhhclprpvrdzbtrfdvidwmyus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396015.0493627-1207-252311401803792/AnsiballZ_stat.py'
Dec 10 19:46:55 compute-0 sudo[213437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:55 compute-0 python3.9[213439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:46:55 compute-0 sudo[213437]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:55 compute-0 sudo[213562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byteiqginwkvjntxsvknzbvuswdlmqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396015.0493627-1207-252311401803792/AnsiballZ_copy.py'
Dec 10 19:46:55 compute-0 sudo[213562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:56 compute-0 python3.9[213564]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765396015.0493627-1207-252311401803792/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:56 compute-0 sudo[213562]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:56 compute-0 sudo[213714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfravsbpipjhxoqjawrvurdsyszjzfdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396016.325575-1222-23904173353243/AnsiballZ_file.py'
Dec 10 19:46:56 compute-0 sudo[213714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:56 compute-0 python3.9[213716]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:56 compute-0 sudo[213714]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:57 compute-0 sudo[213866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqqwrcjrclqzvcjfeypsdtpfwbtrunyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396017.0378678-1230-276720222192324/AnsiballZ_command.py'
Dec 10 19:46:57 compute-0 sudo[213866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:57 compute-0 python3.9[213868]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:46:57 compute-0 sudo[213866]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:58 compute-0 sudo[214021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klkfuhmguynydxeukzzrhgtknzqthrco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396017.7710228-1238-110028618055383/AnsiballZ_blockinfile.py'
Dec 10 19:46:58 compute-0 sudo[214021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:58 compute-0 python3.9[214023]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:46:58 compute-0 sudo[214021]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:58 compute-0 sudo[214173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlipdykhslycukpeeavfyafbqzsiimzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396018.629964-1247-186383017929025/AnsiballZ_command.py'
Dec 10 19:46:58 compute-0 sudo[214173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:59 compute-0 python3.9[214175]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:46:59 compute-0 sudo[214173]: pam_unix(sudo:session): session closed for user root
Dec 10 19:46:59 compute-0 sudo[214343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqprtieeidazcpktcjsbubishbohexyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396019.240774-1255-258280113680366/AnsiballZ_stat.py'
Dec 10 19:46:59 compute-0 sudo[214343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:46:59 compute-0 podman[214300]: 2025-12-10 19:46:59.524566783 +0000 UTC m=+0.074365828 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 10 19:46:59 compute-0 python3.9[214347]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:46:59 compute-0 sudo[214343]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:00 compute-0 sudo[214499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrucudufbpsgghlinjxwbgzhsmkunru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396019.8575706-1263-34580520085624/AnsiballZ_command.py'
Dec 10 19:47:00 compute-0 sudo[214499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:00 compute-0 python3.9[214501]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:47:00 compute-0 sudo[214499]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:00 compute-0 podman[203484]: time="2025-12-10T19:47:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:47:00 compute-0 podman[203484]: @ - - [10/Dec/2025:19:47:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Dec 10 19:47:00 compute-0 podman[203484]: @ - - [10/Dec/2025:19:47:00 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3401 "" "Go-http-client/1.1"
Dec 10 19:47:00 compute-0 sudo[214657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lorqanxlhxkfulrqvrvkcktmolwcylnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396020.6571097-1271-60250514942107/AnsiballZ_file.py'
Dec 10 19:47:00 compute-0 sudo[214657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:01 compute-0 python3.9[214659]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:01 compute-0 sudo[214657]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:01 compute-0 openstack_network_exporter[205632]: ERROR   19:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:47:01 compute-0 openstack_network_exporter[205632]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:47:01 compute-0 openstack_network_exporter[205632]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:47:01 compute-0 openstack_network_exporter[205632]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:47:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:47:01 compute-0 openstack_network_exporter[205632]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:47:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:47:01 compute-0 sshd-session[189623]: Connection closed by 192.168.122.30 port 35872
Dec 10 19:47:01 compute-0 sshd-session[189620]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:47:01 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 10 19:47:01 compute-0 systemd[1]: session-26.scope: Consumed 1min 41.702s CPU time.
Dec 10 19:47:01 compute-0 systemd-logind[789]: Session 26 logged out. Waiting for processes to exit.
Dec 10 19:47:01 compute-0 systemd-logind[789]: Removed session 26.
Dec 10 19:47:06 compute-0 podman[214689]: 2025-12-10 19:47:06.100166321 +0000 UTC m=+0.080018981 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:47:07 compute-0 podman[214710]: 2025-12-10 19:47:07.075591152 +0000 UTC m=+0.061922370 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:47:07 compute-0 sshd-session[214735]: Accepted publickey for zuul from 192.168.122.30 port 50676 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:47:07 compute-0 systemd-logind[789]: New session 27 of user zuul.
Dec 10 19:47:07 compute-0 systemd[1]: Started Session 27 of User zuul.
Dec 10 19:47:07 compute-0 sshd-session[214735]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:47:07 compute-0 sudo[214888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxjunythsdmnokbkmartjuruafineodj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396027.2985528-24-156454777744166/AnsiballZ_systemd_service.py'
Dec 10 19:47:07 compute-0 sudo[214888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:08 compute-0 python3.9[214890]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:47:08 compute-0 systemd[1]: Reloading.
Dec 10 19:47:08 compute-0 systemd-sysv-generator[214936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:47:08 compute-0 systemd-rc-local-generator[214933]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:47:08 compute-0 podman[214892]: 2025-12-10 19:47:08.498145806 +0000 UTC m=+0.165984480 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 19:47:09 compute-0 sudo[214888]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:10 compute-0 python3.9[215097]: ansible-ansible.builtin.service_facts Invoked
Dec 10 19:47:10 compute-0 network[215114]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 10 19:47:10 compute-0 network[215115]: 'network-scripts' will be removed from distribution in near future.
Dec 10 19:47:10 compute-0 network[215116]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 10 19:47:14 compute-0 podman[215211]: 2025-12-10 19:47:14.054893102 +0000 UTC m=+0.086131385 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 19:47:15 compute-0 sudo[215407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxaoejxecrynlwzapfmlmabmfavisnbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396034.9079776-47-281058373512879/AnsiballZ_systemd_service.py'
Dec 10 19:47:15 compute-0 sudo[215407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:15 compute-0 python3.9[215409]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:47:15 compute-0 sudo[215407]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:16 compute-0 sudo[215560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonuzljuyytjzoolsxtswyshmebwopcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396035.8746629-57-58085674219037/AnsiballZ_file.py'
Dec 10 19:47:16 compute-0 sudo[215560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:16 compute-0 python3.9[215562]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:16 compute-0 sudo[215560]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:16 compute-0 sudo[215712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-widldqbdcbiehcqxkostjhwjukakadlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396036.619291-65-214736968754915/AnsiballZ_file.py'
Dec 10 19:47:16 compute-0 sudo[215712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:17 compute-0 python3.9[215714]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:17 compute-0 sudo[215712]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:17 compute-0 sudo[215864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nngxigygrzlyuxrsvfbgtcvceodemagz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396037.2750964-74-179581266449925/AnsiballZ_command.py'
Dec 10 19:47:17 compute-0 sudo[215864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:17 compute-0 python3.9[215866]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:47:17 compute-0 sudo[215864]: pam_unix(sudo:session): session closed for user root
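The ansible.legacy.command task above (pid 215866) runs a short shell snippet: if certmonger.service is currently active, disable and stop it, then mask it unless a local unit file already exists under /etc/systemd/system. A rough Python rendering of that logic, shown only as a sketch of what the logged shell does, not the playbook's actual code:

    import os
    import subprocess

    def systemctl(*args):
        # Thin wrapper; returncode 0 from `is-active` means the unit is running.
        return subprocess.run(["systemctl", *args], check=False)

    if systemctl("is-active", "certmonger.service").returncode == 0:
        systemctl("disable", "--now", "certmonger.service")
        # Mask only when no local override unit exists, mirroring the `test -f || mask` logic above.
        if not os.path.exists("/etc/systemd/system/certmonger.service"):
            systemctl("mask", "certmonger.service")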
Dec 10 19:47:18 compute-0 python3.9[216018]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:47:19 compute-0 sudo[216168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inzkqxonxehpgsxguaoakorhgewcykqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396038.8855274-92-175284687006994/AnsiballZ_systemd_service.py'
Dec 10 19:47:19 compute-0 sudo[216168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:19 compute-0 python3.9[216170]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:47:19 compute-0 systemd[1]: Reloading.
Dec 10 19:47:19 compute-0 systemd-sysv-generator[216201]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:47:19 compute-0 systemd-rc-local-generator[216197]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:47:19 compute-0 sudo[216168]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:20 compute-0 sudo[216354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjtrxjwxkdzgqedhfduynbnfblfehbpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396040.0254035-100-195912624937189/AnsiballZ_command.py'
Dec 10 19:47:20 compute-0 sudo[216354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:20 compute-0 python3.9[216356]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:47:20 compute-0 sudo[216354]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:21 compute-0 sudo[216526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkvuzhgfeudvjduhgqwuwduiijywzsjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396040.7811387-109-49315154392042/AnsiballZ_file.py'
Dec 10 19:47:21 compute-0 sudo[216526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:21 compute-0 podman[216480]: 2025-12-10 19:47:21.095785622 +0000 UTC m=+0.080779211 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec 10 19:47:21 compute-0 python3.9[216532]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:47:21 compute-0 sudo[216526]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:22 compute-0 python3.9[216682]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:47:22 compute-0 python3.9[216834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:47:23.356 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:47:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:47:23.356 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:47:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:47:23.356 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:47:23 compute-0 podman[216929]: 2025-12-10 19:47:23.490486789 +0000 UTC m=+0.062845694 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:47:23 compute-0 python3.9[216968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765396042.3772197-125-827334938829/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
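The stat/copy pair above compares the rendered template against the file on disk by SHA-1 (checksum_algorithm=sha1, checksum=e86e0e43...). A minimal sketch of that comparison; the expected digest is the one recorded in the copy task, and reading the destination file requires the same privileges the play uses:

    import hashlib

    def sha1_of(path, chunk=65536):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while data := f.read(chunk):
                h.update(data)
        return h.hexdigest()

    expected = "e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584"  # checksum logged by the copy task above
    path = "/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf"
    print(sha1_of(path) == expected)  # True if the deployed file matches the rendered template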
Dec 10 19:47:24 compute-0 sudo[217130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaesvhdjwogyrypixysolcfasrdnlbeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396043.996555-143-154385173683733/AnsiballZ_getent.py'
Dec 10 19:47:24 compute-0 sudo[217130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:24 compute-0 python3.9[217132]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 10 19:47:24 compute-0 sudo[217130]: pam_unix(sudo:session): session closed for user root
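The getent task above only confirms that the ceilometer account exists (fail_key=True) before later tasks chown its TLS material. A rough Python equivalent of that lookup, not the Ansible module itself:

    import pwd

    try:
        entry = pwd.getpwnam("ceilometer")  # same lookup as `getent passwd ceilometer`
        print(entry.pw_uid, entry.pw_gid, entry.pw_dir)
    except KeyError:
        raise SystemExit("user 'ceilometer' not found")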
Dec 10 19:47:25 compute-0 python3.9[217283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:26 compute-0 python3.9[217404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765396045.4262316-171-269310506986554/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:26 compute-0 python3.9[217554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:27 compute-0 python3.9[217675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765396046.5043044-171-46709990578319/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:28 compute-0 python3.9[217825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:28 compute-0 python3.9[217946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765396047.5792444-171-260394886965743/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:29 compute-0 python3.9[218096]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:47:29 compute-0 podman[203484]: time="2025-12-10T19:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:47:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Dec 10 19:47:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3417 "" "Go-http-client/1.1"
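The two GET lines above are requests against podman's libpod REST API served on the podman socket (the podman_exporter container later in this log mounts /run/podman/podman.sock for exactly this purpose). A minimal sketch of the same containers/json query from Python over a Unix socket, assuming local access to that socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a Unix-domain socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])  # status plus the start of the JSON container list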
Dec 10 19:47:29 compute-0 podman[218222]: 2025-12-10 19:47:29.914664229 +0000 UTC m=+0.081188563 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 10 19:47:30 compute-0 python3.9[218265]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:47:30 compute-0 python3.9[218419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:31 compute-0 python3.9[218540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765396050.2388427-230-199900009299373/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
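Several of the copy/file tasks in this block log mode=420 rather than an octal string; Ansible records the numeric mode in decimal, and 420 is simply 0644. A one-line check:

    # 420 (decimal) is the same permission bits as octal 0644 (rw-r--r--).
    assert 420 == 0o644
    print(oct(420))  # -> 0o644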
Dec 10 19:47:31 compute-0 openstack_network_exporter[205632]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:47:31 compute-0 openstack_network_exporter[205632]: ERROR   19:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:47:31 compute-0 openstack_network_exporter[205632]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:47:31 compute-0 openstack_network_exporter[205632]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:47:31 compute-0 openstack_network_exporter[205632]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:47:31 compute-0 python3.9[218691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:32 compute-0 python3.9[218767]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:32 compute-0 python3.9[218917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:33 compute-0 python3.9[219038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765396052.5251226-230-150722845062177/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:34 compute-0 python3.9[219188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:34 compute-0 python3.9[219309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765396053.5980191-230-182182558694956/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:34 compute-0 nova_compute[189279]: 2025-12-10 19:47:34.770 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:35 compute-0 python3.9[219459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:35 compute-0 nova_compute[189279]: 2025-12-10 19:47:35.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:35 compute-0 python3.9[219580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765396054.776313-230-259546055488312/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:36 compute-0 podman[219704]: 2025-12-10 19:47:36.19167827 +0000 UTC m=+0.048807441 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 19:47:36 compute-0 python3.9[219746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.520 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.520 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.521 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.675 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.676 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5806MB free_disk=72.43218231201172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.676 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.676 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.747 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.747 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.766 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.782 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.783 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:47:36 compute-0 nova_compute[189279]: 2025-12-10 19:47:36.784 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
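The inventory dict the resource tracker reports above is what placement uses to size this provider; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged numbers:

    # Usable capacity implied by the inventory logged for provider fc709657-...:
    #   usable = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 71.1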
Dec 10 19:47:36 compute-0 python3.9[219870]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765396055.8721793-230-178722885263123/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:37 compute-0 podman[219994]: 2025-12-10 19:47:37.345229703 +0000 UTC m=+0.048842832 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:47:37 compute-0 python3.9[220036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:37 compute-0 nova_compute[189279]: 2025-12-10 19:47:37.784 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:47:37 compute-0 nova_compute[189279]: 2025-12-10 19:47:37.785 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:47:37 compute-0 nova_compute[189279]: 2025-12-10 19:47:37.785 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:47:37 compute-0 nova_compute[189279]: 2025-12-10 19:47:37.800 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:47:37 compute-0 python3.9[220121]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:38 compute-0 sudo[220271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvchprcajacupuovawnjpzukeqnrawuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396058.0990405-325-123665691172195/AnsiballZ_file.py'
Dec 10 19:47:38 compute-0 sudo[220271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:38 compute-0 python3.9[220273]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:38 compute-0 sudo[220271]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:38 compute-0 sudo[220423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zylrmkbssklfyyrzojllvdcpyyjnaspo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396058.722483-333-264262898337410/AnsiballZ_file.py'
Dec 10 19:47:38 compute-0 sudo[220423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:39 compute-0 python3.9[220425]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:39 compute-0 sudo[220423]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:39 compute-0 sudo[220575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anfmrxhwfglcwvcbubzcphjdifwqvqwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396059.3460677-341-184053153097601/AnsiballZ_file.py'
Dec 10 19:47:39 compute-0 sudo[220575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:39 compute-0 podman[220577]: 2025-12-10 19:47:39.691626474 +0000 UTC m=+0.078665725 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 19:47:39 compute-0 python3.9[220578]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:47:39 compute-0 sudo[220575]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:40 compute-0 sudo[220753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvgowwfrpyhlalpcicpqtjcefdhqpzmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396059.9713097-349-195910211528452/AnsiballZ_stat.py'
Dec 10 19:47:40 compute-0 sudo[220753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:40 compute-0 python3.9[220755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:40 compute-0 sudo[220753]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:40 compute-0 sudo[220876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skxsjsyerkdninosqhwqitlungovixqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396059.9713097-349-195910211528452/AnsiballZ_copy.py'
Dec 10 19:47:40 compute-0 sudo[220876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:40 compute-0 python3.9[220878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765396059.9713097-349-195910211528452/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:47:40 compute-0 sudo[220876]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:41 compute-0 sudo[220952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsjrizsubuilmqxpozrviiymkxzhzzkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396059.9713097-349-195910211528452/AnsiballZ_stat.py'
Dec 10 19:47:41 compute-0 sudo[220952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:41 compute-0 python3.9[220954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:41 compute-0 sudo[220952]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:41 compute-0 sudo[221075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iisbweozxnromcpbfnbvuebzytlgfntb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396059.9713097-349-195910211528452/AnsiballZ_copy.py'
Dec 10 19:47:41 compute-0 sudo[221075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:41 compute-0 python3.9[221077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765396059.9713097-349-195910211528452/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:47:41 compute-0 sudo[221075]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.168 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.169 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.169 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa043e060>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:47:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:47:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
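The DEBUG lines above trace one complete polling cycle: each pollster extension is registered against the shared ThreadPoolExecutor, its discovery method (local_instances) is run once and cached, the pollster is skipped when discovery returns no resources, and every pollster still reports "Finished processing". The following is a minimal, hypothetical sketch of that control flow, not the real ceilometer/polling/manager.py code; names such as discovery_method and get_samples are illustrative.

    # Illustrative sketch of a discovery-cached polling cycle (assumption:
    # simplified stand-in for ceilometer.polling.manager, not the real code).
    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, discover):
        discovery_cache = {}   # discovery method name -> discovered resources
        history = {}           # pollster name -> samples gathered this cycle

        def run_one(pollster):
            method = pollster.discovery_method          # e.g. "local_instances"
            if method not in discovery_cache:
                discovery_cache[method] = discover(method)
            resources = discovery_cache[method]
            if not resources:
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
                history[pollster.name] = []
                return
            history[pollster.name] = list(pollster.get_samples(resources))

        with ThreadPoolExecutor(max_workers=4) as executor:
            for future in [executor.submit(run_one, p) for p in pollsters]:
                future.result()   # every pollster finishes, as in the log above
        return history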
Dec 10 19:47:42 compute-0 sudo[221228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msqzdoutbkxlakilxyfhjrvmquohuqeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396062.0030456-349-174758837845712/AnsiballZ_stat.py'
Dec 10 19:47:42 compute-0 sudo[221228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:42 compute-0 python3.9[221230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:47:42 compute-0 sudo[221228]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:42 compute-0 sudo[221351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpknrjyoppmsmwefjjlpxiadmhevzotu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396062.0030456-349-174758837845712/AnsiballZ_copy.py'
Dec 10 19:47:42 compute-0 sudo[221351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:42 compute-0 python3.9[221353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765396062.0030456-349-174758837845712/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 10 19:47:43 compute-0 sudo[221351]: pam_unix(sudo:session): session closed for user root
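The stat/copy task pair above is Ansible's usual "copy only when changed" pattern: checksum the destination, compare it with the source, copy when they differ, then apply mode, owner and SELinux type. A rough plain-Python equivalent, assuming a hypothetical helper rather than the ansible-ansible.legacy.copy module itself:

    # Rough equivalent of the stat + copy tasks above (assumption: simplified
    # helper, not the ansible copy module).
    import hashlib, os, shutil, subprocess

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def deploy_healthcheck(src, dest, mode=0o700, owner="zuul", setype="container_file_t"):
        if not os.path.exists(dest) or sha1_of(dest) != sha1_of(src):
            shutil.copy2(src, dest)                                # changed -> copy
        os.chmod(dest, mode)
        shutil.chown(dest, user=owner, group=owner)
        subprocess.run(["chcon", "-t", setype, dest], check=True)  # SELinux type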
Dec 10 19:47:43 compute-0 sudo[221503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgydfjipkgdvixtrbnasjtkszzbmohth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396063.2836869-391-176098220698664/AnsiballZ_container_config_data.py'
Dec 10 19:47:43 compute-0 sudo[221503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:43 compute-0 python3.9[221505]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec 10 19:47:43 compute-0 sudo[221503]: pam_unix(sudo:session): session closed for user root
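The ansible-container_config_data call above collects the JSON container definitions matching config_pattern under config_path. The exact module behaviour is an assumption here; a plausible sketch is just a glob plus json.load:

    # Assumed behaviour of ansible-container_config_data: gather JSON configs
    # matching the pattern under the config path (sketch only).
    import glob, json, os

    def load_container_configs(config_path, config_pattern):
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, config_pattern))):
            with open(path) as f:
                configs[os.path.basename(path)] = json.load(f)
        return configs

    # e.g. load_container_configs("/var/lib/openstack/config/telemetry-power-monitoring",
    #                              "ceilometer_agent_ipmi.json")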
Dec 10 19:47:44 compute-0 sudo[221666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouidgnvbkkhqtalxezgimstqzphtogka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396064.1833153-400-242747425169201/AnsiballZ_container_config_hash.py'
Dec 10 19:47:44 compute-0 sudo[221666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:44 compute-0 podman[221629]: 2025-12-10 19:47:44.629939053 +0000 UTC m=+0.072739128 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 10 19:47:44 compute-0 python3.9[221671]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:47:44 compute-0 sudo[221666]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:45 compute-0 sudo[221825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyayxgtrlzbxwobmwgxzrvyudxnyuewv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396065.0699253-410-42732238178104/AnsiballZ_edpm_container_manage.py'
Dec 10 19:47:45 compute-0 sudo[221825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:45 compute-0 python3[221827]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:47:46 compute-0 podman[221863]: 2025-12-10 19:47:46.004625983 +0000 UTC m=+0.052386916 container create e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 10 19:47:46 compute-0 podman[221863]: 2025-12-10 19:47:45.975667802 +0000 UTC m=+0.023428765 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 10 19:47:46 compute-0 python3[221827]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec 10 19:47:46 compute-0 sudo[221825]: pam_unix(sudo:session): session closed for user root
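The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage expands the config_data dict into the podman create command. A hedged sketch of that translation, covering only the fields visible above (environment, healthcheck, net, privileged, security_opt, user, volumes); this is a simplified rendering, not the module's real code:

    # Sketch of turning the config_data dict above into podman create arguments
    # (assumption: simplified rendering, not the edpm_container_manage module).
    def podman_create_args(name, data):
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid",
                "--log-driver", "journald", "--log-level", "info"]
        for key, value in data.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in data:
            args += ["--healthcheck-command", data["healthcheck"]["test"]]
        if data.get("net") == "host":
            args += ["--network", "host"]
        if data.get("privileged") == "true":
            args += ["--privileged=True"]
        if "security_opt" in data:
            args += ["--security-opt", data["security_opt"]]
        if "user" in data:
            args += ["--user", data["user"]]
        for volume in data.get("volumes", []):
            args += ["--volume", volume]
        args += [data["image"], data.get("command", "")]
        return args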
Dec 10 19:47:46 compute-0 sudo[222051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coegucauuzjufuacjttlbxahpdofgwbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396066.2732465-418-141780089534142/AnsiballZ_stat.py'
Dec 10 19:47:46 compute-0 sudo[222051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:46 compute-0 python3.9[222053]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:47:46 compute-0 sudo[222051]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:47 compute-0 sudo[222205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqiyscyqfdajwluptekzovoqzpstjxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396066.943137-427-154948028068152/AnsiballZ_file.py'
Dec 10 19:47:47 compute-0 sudo[222205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:47 compute-0 python3.9[222207]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:47 compute-0 sudo[222205]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:47 compute-0 sudo[222356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htbmqzxrizeaoupkxxjxrtbfhowlhybz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396067.4901571-427-262541143199185/AnsiballZ_copy.py'
Dec 10 19:47:47 compute-0 sudo[222356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:48 compute-0 python3.9[222358]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765396067.4901571-427-262541143199185/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:48 compute-0 sudo[222356]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:48 compute-0 sudo[222432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gabqqxohbadbnjftdjgtilgkppnvpfag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396067.4901571-427-262541143199185/AnsiballZ_systemd.py'
Dec 10 19:47:48 compute-0 sudo[222432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:49 compute-0 python3.9[222434]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:47:49 compute-0 systemd[1]: Reloading.
Dec 10 19:47:49 compute-0 systemd-rc-local-generator[222458]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:47:49 compute-0 systemd-sysv-generator[222462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:47:49 compute-0 sudo[222432]: pam_unix(sudo:session): session closed for user root
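The daemon-reload task above and the enable/restart task that follows amount to three systemctl calls against the generated unit. A minimal equivalent via subprocess (systemctl assumed on PATH, unit name taken from the log):

    # Equivalent of the two ansible-systemd tasks in this section (sketch).
    import subprocess

    def reload_and_restart(unit="edpm_ceilometer_agent_ipmi.service"):
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)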
Dec 10 19:47:49 compute-0 sudo[222543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqfwkvsapqdelphlmxequrvvyzttzawq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396067.4901571-427-262541143199185/AnsiballZ_systemd.py'
Dec 10 19:47:49 compute-0 sudo[222543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:49 compute-0 python3.9[222545]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:47:50 compute-0 systemd[1]: Reloading.
Dec 10 19:47:50 compute-0 systemd-sysv-generator[222577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:47:50 compute-0 systemd-rc-local-generator[222574]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:47:50 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 10 19:47:50 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.
Dec 10 19:47:50 compute-0 podman[222584]: 2025-12-10 19:47:50.491846351 +0000 UTC m=+0.161377677 container init e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + sudo -E kolla_set_configs
Dec 10 19:47:50 compute-0 podman[222584]: 2025-12-10 19:47:50.515877402 +0000 UTC m=+0.185408728 container start e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 10 19:47:50 compute-0 sudo[222606]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 10 19:47:50 compute-0 podman[222584]: ceilometer_agent_ipmi
Dec 10 19:47:50 compute-0 sudo[222606]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:47:50 compute-0 sudo[222606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:47:50 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 10 19:47:50 compute-0 sudo[222543]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Validating config file
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Copying service configuration files
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: INFO:__main__:Writing out command to execute
Dec 10 19:47:50 compute-0 sudo[222606]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: ++ cat /run_command
Dec 10 19:47:50 compute-0 podman[222607]: 2025-12-10 19:47:50.583588044 +0000 UTC m=+0.057363088 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + ARGS=
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + sudo kolla_copy_cacerts
Dec 10 19:47:50 compute-0 systemd[1]: e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-70b1cdc42c4597a5.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:47:50 compute-0 systemd[1]: e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-70b1cdc42c4597a5.service: Failed with result 'exit-code'.
Dec 10 19:47:50 compute-0 sudo[222629]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 10 19:47:50 compute-0 sudo[222629]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:47:50 compute-0 sudo[222629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:47:50 compute-0 sudo[222629]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + [[ ! -n '' ]]
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + . kolla_extend_start
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + umask 0022
Dec 10 19:47:50 compute-0 ceilometer_agent_ipmi[222600]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec 10 19:47:51 compute-0 sudo[222781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgppvkzayylakdedichvbjfwqhskevqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396070.855548-453-176601378371607/AnsiballZ_container_config_data.py'
Dec 10 19:47:51 compute-0 sudo[222781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:51 compute-0 python3.9[222783]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec 10 19:47:51 compute-0 sudo[222781]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.463 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.463 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.463 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.463 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.463 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.463 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.464 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.465 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.466 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.467 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.468 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.469 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.473 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.474 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.477 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.478 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.478 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.478 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.478 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.478 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.497 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.498 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.499 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 10 19:47:51 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:51.603 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp_a90j3ub/privsep.sock']
Dec 10 19:47:51 compute-0 sudo[222887]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp_a90j3ub/privsep.sock
Dec 10 19:47:51 compute-0 sudo[222887]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:47:51 compute-0 sudo[222887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:47:51 compute-0 sudo[222957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkwgiwovkbimpnejjtywjqvkvlqwkzwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396071.4750707-462-115013367282612/AnsiballZ_container_config_hash.py'
Dec 10 19:47:51 compute-0 sudo[222957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:51 compute-0 podman[222914]: 2025-12-10 19:47:51.759641995 +0000 UTC m=+0.059545556 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter)
Dec 10 19:47:51 compute-0 python3.9[222963]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 10 19:47:51 compute-0 sudo[222957]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:52 compute-0 sudo[222887]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.262 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.263 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_a90j3ub/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.129 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.139 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.143 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.143 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.405 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.405 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.406 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.407 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.407 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.407 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.407 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.407 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.407 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.408 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.408 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.408 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.408 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.411 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.411 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.411 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.411 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.411 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.411 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.412 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.413 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.414 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.415 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.416 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.417 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.418 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.419 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.420 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.421 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.422 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.423 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.424 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.425 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.426 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.427 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.428 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.429 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.430 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.431 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 10 19:47:52 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:52.434 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 10 19:47:52 compute-0 sudo[223119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koeqezyyhbitqnvjhfutarkwpkcgbdqk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396072.2715292-472-112290505136196/AnsiballZ_edpm_container_manage.py'
Dec 10 19:47:52 compute-0 sudo[223119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:52 compute-0 python3[223121]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec 10 19:47:53 compute-0 podman[223158]: 2025-12-10 19:47:53.104352457 +0000 UTC m=+0.047757952 container create ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-container, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:47:53 compute-0 podman[223158]: 2025-12-10 19:47:53.077669467 +0000 UTC m=+0.021074982 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 10 19:47:53 compute-0 python3[223121]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec 10 19:47:53 compute-0 sudo[223119]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:53 compute-0 sudo[223355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuhprysflxbivwlsjciyupgfbvghvcqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396073.4048707-480-261134527968356/AnsiballZ_stat.py'
Dec 10 19:47:53 compute-0 sudo[223355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:53 compute-0 podman[223320]: 2025-12-10 19:47:53.727377485 +0000 UTC m=+0.082391374 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 19:47:53 compute-0 python3.9[223357]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:47:53 compute-0 sudo[223355]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:54 compute-0 sudo[223527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlysxxahsbxdfiykpopjtqttywscunah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396074.1364734-489-99776560736850/AnsiballZ_file.py'
Dec 10 19:47:54 compute-0 sudo[223527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:54 compute-0 python3.9[223529]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:54 compute-0 sudo[223527]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:55 compute-0 sudo[223678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjdpoedsmyafrxmcpdihhljsjhzycxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396074.7205565-489-183065217279372/AnsiballZ_copy.py'
Dec 10 19:47:55 compute-0 sudo[223678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:55 compute-0 python3.9[223680]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765396074.7205565-489-183065217279372/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:47:55 compute-0 sudo[223678]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:55 compute-0 sudo[223754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caqeovxeksjryzodpxdeekwgcvtmosjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396074.7205565-489-183065217279372/AnsiballZ_systemd.py'
Dec 10 19:47:55 compute-0 sudo[223754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:56 compute-0 python3.9[223756]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 10 19:47:56 compute-0 systemd[1]: Reloading.
Dec 10 19:47:56 compute-0 systemd-rc-local-generator[223786]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:47:56 compute-0 systemd-sysv-generator[223790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:47:56 compute-0 sudo[223754]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:56 compute-0 sudo[223866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gukthfukkzijwwoghdpkagiwcqthkrsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396074.7205565-489-183065217279372/AnsiballZ_systemd.py'
Dec 10 19:47:56 compute-0 sudo[223866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:56 compute-0 python3.9[223868]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 10 19:47:57 compute-0 systemd[1]: Reloading.
Dec 10 19:47:57 compute-0 systemd-rc-local-generator[223897]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 10 19:47:57 compute-0 systemd-sysv-generator[223900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 10 19:47:57 compute-0 systemd[1]: Starting kepler container...
Dec 10 19:47:57 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:47:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.
Dec 10 19:47:57 compute-0 podman[223908]: 2025-12-10 19:47:57.460340323 +0000 UTC m=+0.117790097 container init ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:47:57 compute-0 kepler[223923]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.490864       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.491013       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.491038       1 config.go:295] kernel version: 5.14
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.491611       1 power.go:78] Unable to obtain power, use estimate method
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.491631       1 redfish.go:169] failed to get redfish credential file path
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.491950       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.491962       1 power.go:79] using none to obtain power
Dec 10 19:47:57 compute-0 kepler[223923]: E1210 19:47:57.491978       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 10 19:47:57 compute-0 kepler[223923]: E1210 19:47:57.491998       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 10 19:47:57 compute-0 podman[223908]: 2025-12-10 19:47:57.491924875 +0000 UTC m=+0.149374639 container start ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Dec 10 19:47:57 compute-0 kepler[223923]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 10 19:47:57 compute-0 kepler[223923]: I1210 19:47:57.493847       1 exporter.go:84] Number of CPUs: 8
Dec 10 19:47:57 compute-0 podman[223908]: kepler
Dec 10 19:47:57 compute-0 systemd[1]: Started kepler container.
Dec 10 19:47:57 compute-0 sudo[223866]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:57 compute-0 podman[223933]: 2025-12-10 19:47:57.558346613 +0000 UTC m=+0.056860894 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, config_id=edpm, release-0.7.12=, distribution-scope=public)
Dec 10 19:47:57 compute-0 systemd[1]: ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854-153ea58125408474.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:47:57 compute-0 systemd[1]: ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854-153ea58125408474.service: Failed with result 'exit-code'.
Dec 10 19:47:57 compute-0 sudo[224106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebdaqwheegeomrgjknjrvghtisjbtkif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396077.705333-513-6226602515280/AnsiballZ_systemd.py'
Dec 10 19:47:57 compute-0 sudo[224106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.053159       1 watcher.go:83] Using in cluster k8s config
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.053532       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 10 19:47:58 compute-0 kepler[223923]: E1210 19:47:58.053645       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.058393       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.058432       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.063077       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.063104       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.070151       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.070183       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.070197       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076628       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076658       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076664       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076669       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076676       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076688       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076765       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076792       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076816       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.076843       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.077044       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 10 19:47:58 compute-0 kepler[223923]: I1210 19:47:58.077439       1 exporter.go:208] Started Kepler in 586.794373ms
Dec 10 19:47:58 compute-0 python3.9[224108]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:47:58 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 10 19:47:58 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:58.385 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 10 19:47:58 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:58.488 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 10 19:47:58 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:58.488 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 10 19:47:58 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:58.489 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 10 19:47:58 compute-0 ceilometer_agent_ipmi[222600]: 2025-12-10 19:47:58.497 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec 10 19:47:58 compute-0 systemd[1]: libpod-e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.scope: Deactivated successfully.
Dec 10 19:47:58 compute-0 systemd[1]: libpod-e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.scope: Consumed 2.266s CPU time.
Dec 10 19:47:58 compute-0 podman[224122]: 2025-12-10 19:47:58.731833156 +0000 UTC m=+0.396480617 container died e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 10 19:47:58 compute-0 systemd[1]: e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-70b1cdc42c4597a5.timer: Deactivated successfully.
Dec 10 19:47:58 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.
Dec 10 19:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-userdata-shm.mount: Deactivated successfully.
Dec 10 19:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82-merged.mount: Deactivated successfully.
Dec 10 19:47:58 compute-0 podman[224122]: 2025-12-10 19:47:58.79653815 +0000 UTC m=+0.461185611 container cleanup e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 19:47:58 compute-0 podman[224122]: ceilometer_agent_ipmi
Dec 10 19:47:58 compute-0 podman[224149]: ceilometer_agent_ipmi
Dec 10 19:47:58 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 10 19:47:58 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 10 19:47:58 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 10 19:47:59 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834606985f6dd20fba1dbcdb87d389e876a94f6cb5f86cb64fd767b2c7fd4a82/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 10 19:47:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.
Dec 10 19:47:59 compute-0 podman[224161]: 2025-12-10 19:47:59.126300509 +0000 UTC m=+0.208220914 container init e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + sudo -E kolla_set_configs
Dec 10 19:47:59 compute-0 podman[224161]: 2025-12-10 19:47:59.167037014 +0000 UTC m=+0.248957399 container start e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 10 19:47:59 compute-0 podman[224161]: ceilometer_agent_ipmi
Dec 10 19:47:59 compute-0 sudo[224182]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 10 19:47:59 compute-0 sudo[224182]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:47:59 compute-0 sudo[224182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:47:59 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 10 19:47:59 compute-0 sudo[224106]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Validating config file
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Copying service configuration files
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: INFO:__main__:Writing out command to execute
Dec 10 19:47:59 compute-0 sudo[224182]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:59 compute-0 podman[224183]: 2025-12-10 19:47:59.258992122 +0000 UTC m=+0.071516645 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: ++ cat /run_command
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + ARGS=
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + sudo kolla_copy_cacerts
Dec 10 19:47:59 compute-0 systemd[1]: e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-25462b2bd26decab.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:47:59 compute-0 systemd[1]: e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-25462b2bd26decab.service: Failed with result 'exit-code'.
Dec 10 19:47:59 compute-0 sudo[224217]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 10 19:47:59 compute-0 sudo[224217]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:47:59 compute-0 sudo[224217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:47:59 compute-0 sudo[224217]: pam_unix(sudo:session): session closed for user root
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + [[ ! -n '' ]]
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + . kolla_extend_start
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + umask 0022
Dec 10 19:47:59 compute-0 ceilometer_agent_ipmi[224176]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec 10 19:47:59 compute-0 podman[203484]: time="2025-12-10T19:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:47:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28294 "" "Go-http-client/1.1"
Dec 10 19:47:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4250 "" "Go-http-client/1.1"
Dec 10 19:47:59 compute-0 sudo[224358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnkvjigranvhstdaygdvrvavhqgvchvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396079.4733746-521-185329866847154/AnsiballZ_systemd.py'
Dec 10 19:47:59 compute-0 sudo[224358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:00 compute-0 podman[224361]: 2025-12-10 19:48:00.119491063 +0000 UTC m=+0.101225237 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 19:48:00 compute-0 python3.9[224360]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.162 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.162 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.162 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.163 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.163 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.163 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.163 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.163 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.163 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.163 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.164 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.165 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.166 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.167 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.168 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.169 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.169 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.169 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.169 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.169 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.169 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.170 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.171 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.172 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.174 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.175 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.176 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.181 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.202 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 10 19:48:00 compute-0 systemd[1]: Stopping kepler container...
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.203 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.204 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.217 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpba_bithz/privsep.sock']
Dec 10 19:48:00 compute-0 sudo[224394]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpba_bithz/privsep.sock
Dec 10 19:48:00 compute-0 sudo[224394]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 10 19:48:00 compute-0 sudo[224394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 10 19:48:00 compute-0 kepler[223923]: I1210 19:48:00.272960       1 exporter.go:218] Received shutdown signal
Dec 10 19:48:00 compute-0 kepler[223923]: I1210 19:48:00.273247       1 exporter.go:226] Exiting...
Dec 10 19:48:00 compute-0 systemd[1]: libpod-ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.scope: Deactivated successfully.
Dec 10 19:48:00 compute-0 conmon[223923]: conmon ffb291adddf8400e9b3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.scope/container/memory.events
Dec 10 19:48:00 compute-0 podman[224385]: 2025-12-10 19:48:00.484136511 +0000 UTC m=+0.264285698 container died ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 10 19:48:00 compute-0 systemd[1]: ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854-153ea58125408474.timer: Deactivated successfully.
Dec 10 19:48:00 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.
Dec 10 19:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854-userdata-shm.mount: Deactivated successfully.
Dec 10 19:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b0314aa4d7102b48df9c54ab0d0251d01d6cbf680745ee727ad53e80ef74b5f-merged.mount: Deactivated successfully.
Dec 10 19:48:00 compute-0 podman[224385]: 2025-12-10 19:48:00.540981515 +0000 UTC m=+0.321130702 container cleanup ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible)
Dec 10 19:48:00 compute-0 podman[224385]: kepler
Dec 10 19:48:00 compute-0 podman[224419]: kepler
Dec 10 19:48:00 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec 10 19:48:00 compute-0 systemd[1]: Stopped kepler container.
Dec 10 19:48:00 compute-0 systemd[1]: Starting kepler container...
Dec 10 19:48:00 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:48:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.
Dec 10 19:48:00 compute-0 podman[224431]: 2025-12-10 19:48:00.750201415 +0000 UTC m=+0.107892684 container init ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, container_name=kepler, config_id=edpm, name=ubi9, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 10 19:48:00 compute-0 kepler[224447]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 10 19:48:00 compute-0 podman[224431]: 2025-12-10 19:48:00.776375752 +0000 UTC m=+0.134066991 container start ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Dec 10 19:48:00 compute-0 podman[224431]: kepler
Dec 10 19:48:00 compute-0 systemd[1]: Started kepler container.
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.787745       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.787958       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.787997       1 config.go:295] kernel version: 5.14
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.788854       1 power.go:78] Unable to obtain power, use estimate method
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.788880       1 redfish.go:169] failed to get redfish credential file path
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.789277       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.789285       1 power.go:79] using none to obtain power
Dec 10 19:48:00 compute-0 kepler[224447]: E1210 19:48:00.789303       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 10 19:48:00 compute-0 kepler[224447]: E1210 19:48:00.789336       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 10 19:48:00 compute-0 kepler[224447]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 10 19:48:00 compute-0 kepler[224447]: I1210 19:48:00.793905       1 exporter.go:84] Number of CPUs: 8
Dec 10 19:48:00 compute-0 sudo[224358]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:00 compute-0 podman[224457]: 2025-12-10 19:48:00.859897225 +0000 UTC m=+0.067884428 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, name=ubi9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 10 19:48:00 compute-0 systemd[1]: ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854-38f31bd5a78754dc.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:48:00 compute-0 systemd[1]: ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854-38f31bd5a78754dc.service: Failed with result 'exit-code'.
Dec 10 19:48:00 compute-0 sudo[224394]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.892 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.893 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpba_bithz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.774 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.777 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.779 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 10 19:48:00 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:00.779 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.006 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.007 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.008 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.009 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.009 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.009 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.010 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.010 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.010 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.010 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.011 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.011 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.011 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.014 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.014 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.015 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.015 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.015 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.015 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.016 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.016 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.016 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.016 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.017 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.017 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.017 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.017 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.018 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.018 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.018 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.018 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.018 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.019 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.019 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.019 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.019 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.019 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.020 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.020 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.020 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.020 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.020 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.021 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.021 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.021 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.021 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.021 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.022 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.022 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.022 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.022 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.022 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.023 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.023 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.023 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.023 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.023 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.024 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.024 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.024 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.024 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.024 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.025 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.025 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.025 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.025 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.025 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.026 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.026 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.026 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.026 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.026 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.027 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.027 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.027 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.027 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.028 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.028 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.028 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.028 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.028 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.029 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.029 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.029 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.029 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.029 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.030 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.030 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.030 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.030 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.030 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.031 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.031 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.031 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.031 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.031 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.032 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.032 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.032 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.032 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.032 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.033 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.033 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.033 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.033 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.033 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.034 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.034 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.034 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.034 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.035 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.035 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.035 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.035 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.035 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.036 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.036 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.036 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.036 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.037 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.037 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.037 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.037 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.038 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.038 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.038 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.038 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.039 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.039 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.039 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.039 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.040 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.040 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.040 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.040 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.040 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.040 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.041 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.041 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.041 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.041 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.041 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.042 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.042 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.042 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.042 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.042 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.043 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.043 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.043 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.043 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.043 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.043 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.044 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.044 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.044 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.044 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.044 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.045 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.045 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.045 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.045 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.045 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.046 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.046 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.046 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.046 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.046 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.046 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.047 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.047 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.047 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.047 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.047 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.047 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.048 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.048 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.048 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.048 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.048 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.048 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.049 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.049 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.049 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.049 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.050 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.050 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.050 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.050 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.050 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.050 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.051 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.051 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.051 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.051 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.051 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.051 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.052 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.052 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.052 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.052 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.052 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.053 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.053 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.053 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.053 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.053 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.053 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.054 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.054 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.054 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.054 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.054 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.055 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.055 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.055 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 10 19:48:01 compute-0 ceilometer_agent_ipmi[224176]: 2025-12-10 19:48:01.057 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 10 19:48:01 compute-0 sudo[224635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bygvlukkxovpsetcaptknrgjicolmryt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396080.9739244-529-178523068068291/AnsiballZ_find.py'
Dec 10 19:48:01 compute-0 sudo[224635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.336109       1 watcher.go:83] Using in cluster k8s config
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.336159       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 10 19:48:01 compute-0 kepler[224447]: E1210 19:48:01.336213       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.340051       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.340103       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.344683       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.344712       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.351723       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.351760       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.351776       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.357932       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.357964       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.357968       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.357972       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.357978       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.357991       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.358068       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.358089       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.358107       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.358138       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.358285       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 10 19:48:01 compute-0 kepler[224447]: I1210 19:48:01.359066       1 exporter.go:208] Started Kepler in 571.699531ms
Dec 10 19:48:01 compute-0 python3.9[224637]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 10 19:48:01 compute-0 openstack_network_exporter[205632]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:48:01 compute-0 openstack_network_exporter[205632]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:48:01 compute-0 openstack_network_exporter[205632]: ERROR   19:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:48:01 compute-0 openstack_network_exporter[205632]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:48:01 compute-0 openstack_network_exporter[205632]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:48:01 compute-0 sudo[224635]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:02 compute-0 sudo[224797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tueoouvybelnptpczmeqqvukpbnrpblf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396081.8597586-539-34483311482889/AnsiballZ_podman_container_info.py'
Dec 10 19:48:02 compute-0 sudo[224797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:02 compute-0 python3.9[224799]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 10 19:48:02 compute-0 sudo[224797]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:03 compute-0 sudo[224962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvnwijabrgpjhljzdpnggiunxohqbciv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396082.9770422-547-72449336993201/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:03 compute-0 sudo[224962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:03 compute-0 python3.9[224964]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:03 compute-0 systemd[1]: Started libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope.
Dec 10 19:48:03 compute-0 podman[224965]: 2025-12-10 19:48:03.902204244 +0000 UTC m=+0.130127985 container exec 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 19:48:03 compute-0 podman[224965]: 2025-12-10 19:48:03.937723351 +0000 UTC m=+0.165647122 container exec_died 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:48:03 compute-0 systemd[1]: libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope: Deactivated successfully.
Dec 10 19:48:03 compute-0 sudo[224962]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:04 compute-0 sudo[225149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpjasdhyqzyyepqhocbfowhpocxqsmfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396084.1797347-555-16715838570660/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:04 compute-0 sudo[225149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:04 compute-0 python3.9[225151]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:05 compute-0 systemd[1]: Started libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope.
Dec 10 19:48:05 compute-0 podman[225152]: 2025-12-10 19:48:05.073556881 +0000 UTC m=+0.088553068 container exec 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:48:05 compute-0 podman[225152]: 2025-12-10 19:48:05.109484947 +0000 UTC m=+0.124481134 container exec_died 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:48:05 compute-0 systemd[1]: libpod-conmon-9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17.scope: Deactivated successfully.
Dec 10 19:48:05 compute-0 sudo[225149]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:05 compute-0 sudo[225334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csartwqhhcbwsshyhonpszxjieidpxeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396085.370741-563-100180929441958/AnsiballZ_file.py'
Dec 10 19:48:05 compute-0 sudo[225334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:05 compute-0 python3.9[225336]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:05 compute-0 sudo[225334]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:06 compute-0 sudo[225499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jafojkpnjjztkhukrnvaidbpnunmgxmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396086.1745048-572-117292772695514/AnsiballZ_podman_container_info.py'
Dec 10 19:48:06 compute-0 sudo[225499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:06 compute-0 podman[225460]: 2025-12-10 19:48:06.51955389 +0000 UTC m=+0.066588004 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:48:06 compute-0 python3.9[225506]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 10 19:48:06 compute-0 sudo[225499]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:07 compute-0 sudo[225669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdqikchukygueqpaywhcappyempbbfng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396087.0125668-580-144707459532625/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:07 compute-0 sudo[225669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:07 compute-0 podman[225671]: 2025-12-10 19:48:07.454459321 +0000 UTC m=+0.069382998 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:48:07 compute-0 python3.9[225672]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:07 compute-0 systemd[1]: Started libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope.
Dec 10 19:48:07 compute-0 podman[225696]: 2025-12-10 19:48:07.669652251 +0000 UTC m=+0.100608650 container exec 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 19:48:07 compute-0 podman[225696]: 2025-12-10 19:48:07.703124102 +0000 UTC m=+0.134080491 container exec_died 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 10 19:48:07 compute-0 systemd[1]: libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope: Deactivated successfully.
Dec 10 19:48:07 compute-0 sudo[225669]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:08 compute-0 sudo[225875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvdjhabailaxtxtnsswgmtitcbvpkpiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396087.9312341-588-198872078460844/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:08 compute-0 sudo[225875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:08 compute-0 python3.9[225877]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:08 compute-0 systemd[1]: Started libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope.
Dec 10 19:48:08 compute-0 podman[225878]: 2025-12-10 19:48:08.628557111 +0000 UTC m=+0.095880454 container exec 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:48:08 compute-0 podman[225878]: 2025-12-10 19:48:08.661066307 +0000 UTC m=+0.128389650 container exec_died 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:48:08 compute-0 systemd[1]: libpod-conmon-6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69.scope: Deactivated successfully.
Dec 10 19:48:08 compute-0 sudo[225875]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:09 compute-0 sudo[226060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kywpvirwdnmpzkvfrgtwpwezttdqkczl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396088.8842647-596-99849925666303/AnsiballZ_file.py'
Dec 10 19:48:09 compute-0 sudo[226060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:09 compute-0 python3.9[226062]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:09 compute-0 sudo[226060]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:10 compute-0 sudo[226227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhfwmucvsfasmyshfeycfqlmgmstqvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396089.7436438-605-245863050598219/AnsiballZ_podman_container_info.py'
Dec 10 19:48:10 compute-0 sudo[226227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:10 compute-0 podman[226186]: 2025-12-10 19:48:10.167890983 +0000 UTC m=+0.149945919 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 10 19:48:10 compute-0 python3.9[226233]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 10 19:48:10 compute-0 sudo[226227]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:10 compute-0 sudo[226402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzyvllyjjrjbgnpjprsqswmmvqrapnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396090.5146885-613-249782687482148/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:10 compute-0 sudo[226402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:11 compute-0 python3.9[226404]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:11 compute-0 systemd[1]: Started libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope.
Dec 10 19:48:11 compute-0 podman[226405]: 2025-12-10 19:48:11.186831014 +0000 UTC m=+0.106431108 container exec b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 10 19:48:11 compute-0 podman[226405]: 2025-12-10 19:48:11.220823812 +0000 UTC m=+0.140423956 container exec_died b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 10 19:48:11 compute-0 systemd[1]: libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope: Deactivated successfully.
Dec 10 19:48:11 compute-0 sudo[226402]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:11 compute-0 sudo[226587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fizjxhwpfuayonufaawgkmhyqsmwycjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396091.4989624-621-82294254003216/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:11 compute-0 sudo[226587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:12 compute-0 python3.9[226589]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:12 compute-0 systemd[1]: Started libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope.
Dec 10 19:48:12 compute-0 podman[226590]: 2025-12-10 19:48:12.207632465 +0000 UTC m=+0.108958038 container exec b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec 10 19:48:12 compute-0 podman[226590]: 2025-12-10 19:48:12.241368726 +0000 UTC m=+0.142694299 container exec_died b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:48:12 compute-0 systemd[1]: libpod-conmon-b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7.scope: Deactivated successfully.
Dec 10 19:48:12 compute-0 sudo[226587]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:12 compute-0 sudo[226768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfpadgdwgzlgojbuvoefcsvcgujhkcup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396092.5326383-629-94649971944846/AnsiballZ_file.py'
Dec 10 19:48:12 compute-0 sudo[226768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:13 compute-0 python3.9[226770]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:13 compute-0 sudo[226768]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:13 compute-0 sudo[226920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvuwpgxkowdqtlljqzilaqmsgvceyfns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396093.3600779-638-82994638386893/AnsiballZ_podman_container_info.py'
Dec 10 19:48:13 compute-0 sudo[226920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:13 compute-0 python3.9[226922]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 10 19:48:13 compute-0 sudo[226920]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:14 compute-0 sudo[227084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrabmarsoywiofsbqqvelujmsecnbhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396094.1824696-646-77816140489207/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:14 compute-0 sudo[227084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:14 compute-0 python3.9[227086]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:14 compute-0 podman[227087]: 2025-12-10 19:48:14.759325025 +0000 UTC m=+0.077779037 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 19:48:14 compute-0 systemd[1]: Started libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope.
Dec 10 19:48:14 compute-0 podman[227107]: 2025-12-10 19:48:14.87472846 +0000 UTC m=+0.102334845 container exec 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 19:48:14 compute-0 podman[227107]: 2025-12-10 19:48:14.909802438 +0000 UTC m=+0.137408723 container exec_died 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 10 19:48:14 compute-0 systemd[1]: libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope: Deactivated successfully.
Dec 10 19:48:14 compute-0 sudo[227084]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:15 compute-0 sudo[227285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjztawyakizvckjvzxfgbslxdbpiztz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396095.1719992-654-55406541019240/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:15 compute-0 sudo[227285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:15 compute-0 python3.9[227287]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:15 compute-0 systemd[1]: Started libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope.
Dec 10 19:48:15 compute-0 podman[227288]: 2025-12-10 19:48:15.834034454 +0000 UTC m=+0.091825945 container exec 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4)
Dec 10 19:48:15 compute-0 podman[227288]: 2025-12-10 19:48:15.866876221 +0000 UTC m=+0.124667712 container exec_died 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 19:48:15 compute-0 systemd[1]: libpod-conmon-84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1.scope: Deactivated successfully.
Dec 10 19:48:15 compute-0 sudo[227285]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:16 compute-0 sudo[227468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbimpvjvyhoinsprvkkpvmqhugqfztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396096.1478555-662-42262296000206/AnsiballZ_file.py'
Dec 10 19:48:16 compute-0 sudo[227468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:16 compute-0 python3.9[227470]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:16 compute-0 sudo[227468]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:17 compute-0 sudo[227620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdgzmgadjqwrhasgtbsrciwovazkdgmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396097.0849285-671-75083905578146/AnsiballZ_podman_container_info.py'
Dec 10 19:48:17 compute-0 sudo[227620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:17 compute-0 python3.9[227622]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 10 19:48:17 compute-0 sudo[227620]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:18 compute-0 sudo[227785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnhodsfmapzwjukfjujksvfwivctalrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396097.990067-679-39193152029602/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:18 compute-0 sudo[227785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:18 compute-0 python3.9[227787]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:18 compute-0 systemd[1]: Started libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope.
Dec 10 19:48:18 compute-0 podman[227788]: 2025-12-10 19:48:18.804066599 +0000 UTC m=+0.107705374 container exec 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:48:18 compute-0 podman[227788]: 2025-12-10 19:48:18.837345617 +0000 UTC m=+0.140984382 container exec_died 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:48:18 compute-0 systemd[1]: libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope: Deactivated successfully.
Dec 10 19:48:18 compute-0 sudo[227785]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:19 compute-0 sudo[227967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvwhixgvopnwvxvsuwrdvmdvazgjzqti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396099.1206038-687-275610697815445/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:19 compute-0 sudo[227967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:19 compute-0 python3.9[227969]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:19 compute-0 systemd[1]: Started libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope.
Dec 10 19:48:19 compute-0 podman[227971]: 2025-12-10 19:48:19.790831451 +0000 UTC m=+0.104482175 container exec 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:48:19 compute-0 podman[227971]: 2025-12-10 19:48:19.823117022 +0000 UTC m=+0.136767746 container exec_died 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 19:48:19 compute-0 systemd[1]: libpod-conmon-22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f.scope: Deactivated successfully.
Dec 10 19:48:19 compute-0 sudo[227967]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:20 compute-0 sudo[228150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevudqszqtmrkbmquccmnejcreynbgsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396100.0533347-695-209026993653906/AnsiballZ_file.py'
Dec 10 19:48:20 compute-0 sudo[228150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:20 compute-0 python3.9[228152]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:20 compute-0 sudo[228150]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:21 compute-0 sudo[228302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzjkkqnzvzeejfbiomhtywgmggocbvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396100.9312892-704-249713475428568/AnsiballZ_podman_container_info.py'
Dec 10 19:48:21 compute-0 sudo[228302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:21 compute-0 python3.9[228304]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 10 19:48:21 compute-0 sudo[228302]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:22 compute-0 podman[228417]: 2025-12-10 19:48:22.091197985 +0000 UTC m=+0.072408330 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec 10 19:48:22 compute-0 sudo[228488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apclpdnltbczohjmpwbkaqulsmwtmggv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396101.820181-712-236936935527912/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:22 compute-0 sudo[228488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:22 compute-0 python3.9[228490]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:22 compute-0 systemd[1]: Started libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope.
Dec 10 19:48:22 compute-0 podman[228491]: 2025-12-10 19:48:22.483685856 +0000 UTC m=+0.099905168 container exec e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:48:22 compute-0 podman[228491]: 2025-12-10 19:48:22.517532831 +0000 UTC m=+0.133752113 container exec_died e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:48:22 compute-0 systemd[1]: libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope: Deactivated successfully.
Dec 10 19:48:22 compute-0 sudo[228488]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:23 compute-0 sudo[228670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhglnslannbumnphquedfnredximnvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396102.7725835-720-231468748832563/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:23 compute-0 sudo[228670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:23 compute-0 python3.9[228672]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:48:23.357 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:48:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:48:23.357 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:48:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:48:23.357 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:48:23 compute-0 systemd[1]: Started libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope.
Dec 10 19:48:23 compute-0 podman[228673]: 2025-12-10 19:48:23.439649179 +0000 UTC m=+0.103054695 container exec e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:48:23 compute-0 podman[228673]: 2025-12-10 19:48:23.471038925 +0000 UTC m=+0.134444411 container exec_died e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:48:23 compute-0 systemd[1]: libpod-conmon-e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56.scope: Deactivated successfully.
Dec 10 19:48:23 compute-0 sudo[228670]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:24 compute-0 podman[228819]: 2025-12-10 19:48:24.091965051 +0000 UTC m=+0.072932344 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:48:24 compute-0 sudo[228876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekdariidmcrbdksjeiidhdpclgfbvod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396103.7612867-728-2116797292421/AnsiballZ_file.py'
Dec 10 19:48:24 compute-0 sudo[228876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:24 compute-0 python3.9[228879]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:24 compute-0 sudo[228876]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:24 compute-0 sudo[229029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bialrhhlluykcawjclrztpvbrnqvigix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396104.6345782-737-251462828440311/AnsiballZ_podman_container_info.py'
Dec 10 19:48:24 compute-0 sudo[229029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:25 compute-0 python3.9[229031]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 10 19:48:25 compute-0 sudo[229029]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:25 compute-0 sudo[229193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofllnbqwquchrtzxuwjyvdpkdosgcvkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396105.5288813-745-114675468621128/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:25 compute-0 sudo[229193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:26 compute-0 python3.9[229195]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:26 compute-0 systemd[1]: Started libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope.
Dec 10 19:48:26 compute-0 podman[229196]: 2025-12-10 19:48:26.284067518 +0000 UTC m=+0.152646374 container exec d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal)
Dec 10 19:48:26 compute-0 podman[229196]: 2025-12-10 19:48:26.320793461 +0000 UTC m=+0.189372307 container exec_died d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 10 19:48:26 compute-0 systemd[1]: libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope: Deactivated successfully.
Dec 10 19:48:26 compute-0 sudo[229193]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:27 compute-0 sudo[229377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npxvmrfqxwbyntxramuaaeqxojskeqer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396106.673606-753-252420213013820/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:27 compute-0 sudo[229377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:27 compute-0 python3.9[229379]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:27 compute-0 systemd[1]: Started libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope.
Dec 10 19:48:27 compute-0 podman[229380]: 2025-12-10 19:48:27.51364439 +0000 UTC m=+0.122873322 container exec d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Dec 10 19:48:27 compute-0 podman[229380]: 2025-12-10 19:48:27.547052913 +0000 UTC m=+0.156281825 container exec_died d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Dec 10 19:48:27 compute-0 systemd[1]: libpod-conmon-d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7.scope: Deactivated successfully.
Dec 10 19:48:27 compute-0 sudo[229377]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:28 compute-0 sudo[229559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehkutwlcqnzrnmfmtjmevuuatqtwpxoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396107.807919-761-32823107081852/AnsiballZ_file.py'
Dec 10 19:48:28 compute-0 sudo[229559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:28 compute-0 python3.9[229561]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:28 compute-0 sudo[229559]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:29 compute-0 sudo[229711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvwmkijsafmftrwiyyrdsgftydejppjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396108.6473172-770-251833463319321/AnsiballZ_podman_container_info.py'
Dec 10 19:48:29 compute-0 sudo[229711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:29 compute-0 python3.9[229713]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 10 19:48:29 compute-0 sudo[229711]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:29 compute-0 podman[203484]: time="2025-12-10T19:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:48:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 10 19:48:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4257 "" "Go-http-client/1.1"
Dec 10 19:48:29 compute-0 sudo[229891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwonrjuvcudyjeroubhhqttrzaxhtmef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396109.5684156-778-42974298028287/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:29 compute-0 podman[229851]: 2025-12-10 19:48:29.943612201 +0000 UTC m=+0.065129608 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi)
Dec 10 19:48:29 compute-0 sudo[229891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:29 compute-0 systemd[1]: e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-25462b2bd26decab.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 19:48:29 compute-0 systemd[1]: e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953-25462b2bd26decab.service: Failed with result 'exit-code'.
Dec 10 19:48:30 compute-0 python3.9[229897]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:30 compute-0 systemd[1]: Started libpod-conmon-e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.scope.
Dec 10 19:48:30 compute-0 podman[229898]: 2025-12-10 19:48:30.255325504 +0000 UTC m=+0.089324357 container exec e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 19:48:30 compute-0 podman[229898]: 2025-12-10 19:48:30.287280536 +0000 UTC m=+0.121279369 container exec_died e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 10 19:48:30 compute-0 podman[229913]: 2025-12-10 19:48:30.322023825 +0000 UTC m=+0.068875073 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 10 19:48:30 compute-0 systemd[1]: libpod-conmon-e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.scope: Deactivated successfully.
Dec 10 19:48:30 compute-0 sudo[229891]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:30 compute-0 sudo[230095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffkgjkyzfsondlywwclwmbcaswmmmlhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396110.5453365-786-248040205519941/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:30 compute-0 sudo[230095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:31 compute-0 podman[230097]: 2025-12-10 19:48:31.054762317 +0000 UTC m=+0.089626465 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler)
Dec 10 19:48:31 compute-0 python3.9[230098]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:31 compute-0 systemd[1]: Started libpod-conmon-e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.scope.
Dec 10 19:48:31 compute-0 podman[230116]: 2025-12-10 19:48:31.3335586 +0000 UTC m=+0.102126749 container exec e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 19:48:31 compute-0 podman[230116]: 2025-12-10 19:48:31.365233045 +0000 UTC m=+0.133801174 container exec_died e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 10 19:48:31 compute-0 systemd[1]: libpod-conmon-e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953.scope: Deactivated successfully.
Dec 10 19:48:31 compute-0 openstack_network_exporter[205632]: ERROR   19:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:48:31 compute-0 openstack_network_exporter[205632]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:48:31 compute-0 openstack_network_exporter[205632]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:48:31 compute-0 openstack_network_exporter[205632]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:48:31 compute-0 openstack_network_exporter[205632]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:48:31 compute-0 sudo[230095]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:32 compute-0 sudo[230295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpifexemrulhbzigzputycnmglgscil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396111.6780007-794-128789174865108/AnsiballZ_file.py'
Dec 10 19:48:32 compute-0 sudo[230295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:32 compute-0 python3.9[230297]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:32 compute-0 sudo[230295]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:32 compute-0 sudo[230447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ribdsnkfhzpgjhaukdwlizkplbkmzcno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396112.5279307-803-239832373021507/AnsiballZ_podman_container_info.py'
Dec 10 19:48:32 compute-0 sudo[230447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:33 compute-0 python3.9[230449]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 10 19:48:33 compute-0 sudo[230447]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:33 compute-0 sudo[230612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqqxpeenqndbabmiipqrvhlzexvkctwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396113.4205942-811-148925174259401/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:33 compute-0 sudo[230612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:34 compute-0 python3.9[230614]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:34 compute-0 systemd[1]: Started libpod-conmon-ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.scope.
Dec 10 19:48:34 compute-0 podman[230615]: 2025-12-10 19:48:34.221797069 +0000 UTC m=+0.110063479 container exec ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 10 19:48:34 compute-0 podman[230615]: 2025-12-10 19:48:34.255292944 +0000 UTC m=+0.143559334 container exec_died ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-type=git, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 10 19:48:34 compute-0 systemd[1]: libpod-conmon-ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.scope: Deactivated successfully.
Dec 10 19:48:34 compute-0 sudo[230612]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:34 compute-0 sudo[230796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiegjcvsomztddmlgxpgxcnkdgizyhwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396114.5234249-819-225410592671227/AnsiballZ_podman_container_exec.py'
Dec 10 19:48:34 compute-0 sudo[230796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:35 compute-0 python3.9[230798]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 10 19:48:35 compute-0 systemd[1]: Started libpod-conmon-ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.scope.
Dec 10 19:48:35 compute-0 podman[230799]: 2025-12-10 19:48:35.287299304 +0000 UTC m=+0.128996341 container exec ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543)
Dec 10 19:48:35 compute-0 podman[230799]: 2025-12-10 19:48:35.320058148 +0000 UTC m=+0.161755185 container exec_died ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release=1214.1726694543, distribution-scope=public)
Dec 10 19:48:35 compute-0 systemd[1]: libpod-conmon-ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854.scope: Deactivated successfully.
Dec 10 19:48:35 compute-0 sudo[230796]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:35 compute-0 nova_compute[189279]: 2025-12-10 19:48:35.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:35 compute-0 nova_compute[189279]: 2025-12-10 19:48:35.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:36 compute-0 sudo[230980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhdqydrrxhizzgwhzpbauvmlqpukwpho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396115.6010847-827-263496547796213/AnsiballZ_file.py'
Dec 10 19:48:36 compute-0 sudo[230980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:36 compute-0 python3.9[230982]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:36 compute-0 sudo[230980]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:37 compute-0 sudo[231148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqedlpvrfuhvjzvhqkajjgmhfwkcdwpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396116.621587-836-164780954176053/AnsiballZ_file.py'
Dec 10 19:48:37 compute-0 sudo[231148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:37 compute-0 podman[231106]: 2025-12-10 19:48:37.1507122 +0000 UTC m=+0.143632866 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 10 19:48:37 compute-0 python3.9[231153]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:37 compute-0 sudo[231148]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:37 compute-0 nova_compute[189279]: 2025-12-10 19:48:37.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:37 compute-0 nova_compute[189279]: 2025-12-10 19:48:37.501 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:37 compute-0 nova_compute[189279]: 2025-12-10 19:48:37.501 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:48:37 compute-0 nova_compute[189279]: 2025-12-10 19:48:37.502 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:48:37 compute-0 nova_compute[189279]: 2025-12-10 19:48:37.514 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:48:37 compute-0 sudo[231325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owemiswkgdtqheflrzvzckxxdawfkwsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396117.5838497-844-243954113902776/AnsiballZ_stat.py'
Dec 10 19:48:38 compute-0 sudo[231325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:38 compute-0 podman[231277]: 2025-12-10 19:48:38.011302209 +0000 UTC m=+0.099356443 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:48:38 compute-0 python3.9[231327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:38 compute-0 sudo[231325]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.490 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.491 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.522 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.523 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.523 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.524 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:48:38 compute-0 sudo[231448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpnpsjpciriotrarvbhjkjeqyygyevqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396117.5838497-844-243954113902776/AnsiballZ_copy.py'
Dec 10 19:48:38 compute-0 sudo[231448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.887 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.888 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5662MB free_disk=72.4309196472168GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.889 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.889 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.961 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.962 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:48:38 compute-0 nova_compute[189279]: 2025-12-10 19:48:38.991 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:48:39 compute-0 nova_compute[189279]: 2025-12-10 19:48:39.007 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:48:39 compute-0 nova_compute[189279]: 2025-12-10 19:48:39.012 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:48:39 compute-0 nova_compute[189279]: 2025-12-10 19:48:39.013 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:48:39 compute-0 python3.9[231450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765396117.5838497-844-243954113902776/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:39 compute-0 sudo[231448]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:39 compute-0 sudo[231600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqslcqhangijscvvffdvtqjiljqrymtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396119.338749-860-110464116929931/AnsiballZ_file.py'
Dec 10 19:48:39 compute-0 sudo[231600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:39 compute-0 python3.9[231602]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:39 compute-0 sudo[231600]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:40 compute-0 sudo[231768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emmlhkubwvuyrassuxmgnzdxazfyjfcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396120.094697-868-173120367243921/AnsiballZ_stat.py'
Dec 10 19:48:40 compute-0 sudo[231768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:40 compute-0 podman[231726]: 2025-12-10 19:48:40.49269019 +0000 UTC m=+0.105987387 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 10 19:48:40 compute-0 python3.9[231775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:40 compute-0 sudo[231768]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:41 compute-0 sudo[231855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbvnoltlgfjsczmeuwekecqanabzzcjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396120.094697-868-173120367243921/AnsiballZ_file.py'
Dec 10 19:48:41 compute-0 sudo[231855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:41 compute-0 python3.9[231857]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:41 compute-0 sudo[231855]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:41 compute-0 sudo[232007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hneqmrpnnlhmxlfjdezezvoojzxgnrcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396121.5035677-880-74971082045691/AnsiballZ_stat.py'
Dec 10 19:48:41 compute-0 sudo[232007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:42 compute-0 python3.9[232009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:42 compute-0 sudo[232007]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:42 compute-0 sudo[232085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnocujajnvehfdspbtlwtgzmxfdmkvjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396121.5035677-880-74971082045691/AnsiballZ_file.py'
Dec 10 19:48:42 compute-0 sudo[232085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:42 compute-0 python3.9[232087]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2mha45a0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:42 compute-0 sudo[232085]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:43 compute-0 sudo[232237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmgequktdvlembttpsoqkwogydpfgbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396122.7896957-892-247822688412751/AnsiballZ_stat.py'
Dec 10 19:48:43 compute-0 sudo[232237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:43 compute-0 python3.9[232239]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:43 compute-0 sudo[232237]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:43 compute-0 sudo[232315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezuirojducxvatltdllmrrqqkqyvgyqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396122.7896957-892-247822688412751/AnsiballZ_file.py'
Dec 10 19:48:43 compute-0 sudo[232315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:43 compute-0 python3.9[232317]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:43 compute-0 sudo[232315]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:44 compute-0 sudo[232467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oskiusnwhoyyuutvurnfzdzxcotxfyiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396124.1617024-905-84411195300249/AnsiballZ_command.py'
Dec 10 19:48:44 compute-0 sudo[232467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:44 compute-0 python3.9[232469]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:48:44 compute-0 sudo[232467]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:45 compute-0 podman[232530]: 2025-12-10 19:48:45.086538347 +0000 UTC m=+0.068416209 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Dec 10 19:48:45 compute-0 sudo[232640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnigxlfwhxwxkyubfhfwtizrbuvzadeq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396124.956665-913-216361713745025/AnsiballZ_edpm_nftables_from_files.py'
Dec 10 19:48:45 compute-0 sudo[232640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:45 compute-0 python3[232642]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 10 19:48:45 compute-0 sudo[232640]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:46 compute-0 sudo[232792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvjimpormdmsjuxreohrulndjbjdxgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396125.907741-921-91133703936640/AnsiballZ_stat.py'
Dec 10 19:48:46 compute-0 sudo[232792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:46 compute-0 python3.9[232794]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:46 compute-0 sudo[232792]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:46 compute-0 sudo[232870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apimtwcridwstqziisrrdajvwasjowew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396125.907741-921-91133703936640/AnsiballZ_file.py'
Dec 10 19:48:46 compute-0 sudo[232870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:47 compute-0 python3.9[232872]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:47 compute-0 sudo[232870]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:47 compute-0 sudo[233022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulrldrplxoshdipaulocluqsumrozfvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396127.3883722-933-179872673446551/AnsiballZ_stat.py'
Dec 10 19:48:47 compute-0 sudo[233022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:47 compute-0 python3.9[233024]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:48 compute-0 sudo[233022]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:48 compute-0 sudo[233100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czjoemnjlcmvkhctqhgpkcslelhpadmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396127.3883722-933-179872673446551/AnsiballZ_file.py'
Dec 10 19:48:48 compute-0 sudo[233100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:48 compute-0 python3.9[233102]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:48 compute-0 sudo[233100]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:49 compute-0 sudo[233252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqxulkwlsxxqqcsbfsazcwconjneezlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396128.8223152-945-223613133261418/AnsiballZ_stat.py'
Dec 10 19:48:49 compute-0 sudo[233252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:49 compute-0 python3.9[233254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:49 compute-0 sudo[233252]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:49 compute-0 sudo[233331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvjekqsigxetkfpgvjfuncsunmrmjwge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396128.8223152-945-223613133261418/AnsiballZ_file.py'
Dec 10 19:48:49 compute-0 sudo[233331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:49 compute-0 python3.9[233333]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:49 compute-0 sudo[233331]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:50 compute-0 sudo[233483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iigzilkicijghifqmkxuddngatmpvcac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396130.2593923-957-156794762156127/AnsiballZ_stat.py'
Dec 10 19:48:50 compute-0 sudo[233483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:50 compute-0 python3.9[233485]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:51 compute-0 sudo[233483]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:51 compute-0 sudo[233561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfornzensyqiturhsygjrbztpruzagze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396130.2593923-957-156794762156127/AnsiballZ_file.py'
Dec 10 19:48:51 compute-0 sudo[233561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:51 compute-0 python3.9[233563]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:51 compute-0 sudo[233561]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:52 compute-0 sudo[233726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdvnodbzvpxzpcgrkgflthwfpozqgnnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396131.863042-969-70683415048817/AnsiballZ_stat.py'
Dec 10 19:48:52 compute-0 sudo[233726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:52 compute-0 podman[233687]: 2025-12-10 19:48:52.449417852 +0000 UTC m=+0.111699874 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec 10 19:48:52 compute-0 python3.9[233731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:48:52 compute-0 sudo[233726]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:53 compute-0 sudo[233859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfvricrynycogqwuvdthbolfwozwvzer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396131.863042-969-70683415048817/AnsiballZ_copy.py'
Dec 10 19:48:53 compute-0 sudo[233859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:53 compute-0 python3.9[233861]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765396131.863042-969-70683415048817/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:53 compute-0 sudo[233859]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:53 compute-0 sudo[234011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnlzqcdpriqvjhtaabtuhuzhvwdlsecd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396133.6135-984-12920432090619/AnsiballZ_file.py'
Dec 10 19:48:54 compute-0 sudo[234011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:54 compute-0 python3.9[234013]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:54 compute-0 sudo[234011]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:54 compute-0 podman[234137]: 2025-12-10 19:48:54.769787808 +0000 UTC m=+0.071501934 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:48:54 compute-0 sudo[234179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skeydrdnushcwiwwycoyvowobnpywhal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396134.4246917-992-186209921494527/AnsiballZ_command.py'
Dec 10 19:48:54 compute-0 sudo[234179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:54 compute-0 python3.9[234188]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:48:55 compute-0 sudo[234179]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:55 compute-0 sudo[234341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwynzprkogribhyweayfrtumnfojkyjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396135.2192495-1000-195355665593798/AnsiballZ_blockinfile.py'
Dec 10 19:48:55 compute-0 sudo[234341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:55 compute-0 python3.9[234343]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:55 compute-0 sudo[234341]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:56 compute-0 sudo[234493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzvdtizkdwdzzxisjhucbstpqntruinn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396136.2642667-1009-20899452521591/AnsiballZ_command.py'
Dec 10 19:48:56 compute-0 sudo[234493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:56 compute-0 python3.9[234495]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:48:56 compute-0 sudo[234493]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:57 compute-0 sudo[234646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuqaznxsiotfkjhvvxkuunewfasxyony ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396137.173426-1017-185657696350021/AnsiballZ_stat.py'
Dec 10 19:48:57 compute-0 sudo[234646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:57 compute-0 python3.9[234648]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 10 19:48:57 compute-0 sudo[234646]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:58 compute-0 sudo[234800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlqjkvgbetqmbheizjclzmwxijuudohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396138.1022835-1025-145634636802203/AnsiballZ_command.py'
Dec 10 19:48:58 compute-0 sudo[234800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:58 compute-0 python3.9[234802]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:48:58 compute-0 sudo[234800]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:59 compute-0 sudo[234955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvqqgkwifhbbasuurplweasvcpygoick ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396138.9786901-1033-109859015379434/AnsiballZ_file.py'
Dec 10 19:48:59 compute-0 sudo[234955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:48:59 compute-0 python3.9[234957]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:48:59 compute-0 sudo[234955]: pam_unix(sudo:session): session closed for user root
Dec 10 19:48:59 compute-0 podman[203484]: time="2025-12-10T19:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:48:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec 10 19:48:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4253 "" "Go-http-client/1.1"
Dec 10 19:49:00 compute-0 sshd-session[214738]: Connection closed by 192.168.122.30 port 50676
Dec 10 19:49:00 compute-0 sshd-session[214735]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:49:00 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 10 19:49:00 compute-0 systemd[1]: session-27.scope: Consumed 1min 27.201s CPU time.
Dec 10 19:49:00 compute-0 systemd-logind[789]: Session 27 logged out. Waiting for processes to exit.
Dec 10 19:49:00 compute-0 systemd-logind[789]: Removed session 27.
Dec 10 19:49:00 compute-0 podman[234982]: 2025-12-10 19:49:00.143361244 +0000 UTC m=+0.119631812 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 10 19:49:01 compute-0 podman[235002]: 2025-12-10 19:49:01.145709826 +0000 UTC m=+0.119859248 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 10 19:49:01 compute-0 podman[235020]: 2025-12-10 19:49:01.233989693 +0000 UTC m=+0.090574860 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, release-0.7.12=, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 10 19:49:01 compute-0 openstack_network_exporter[205632]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:49:01 compute-0 openstack_network_exporter[205632]: ERROR   19:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:49:01 compute-0 openstack_network_exporter[205632]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:49:01 compute-0 openstack_network_exporter[205632]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:49:01 compute-0 openstack_network_exporter[205632]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:49:05 compute-0 sshd-session[235041]: Accepted publickey for zuul from 192.168.122.30 port 38714 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 19:49:05 compute-0 systemd-logind[789]: New session 28 of user zuul.
Dec 10 19:49:05 compute-0 systemd[1]: Started Session 28 of User zuul.
Dec 10 19:49:06 compute-0 sshd-session[235041]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:49:07 compute-0 python3.9[235194]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:49:08 compute-0 podman[235275]: 2025-12-10 19:49:08.121385267 +0000 UTC m=+0.094762095 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:49:08 compute-0 podman[235302]: 2025-12-10 19:49:08.210090916 +0000 UTC m=+0.068567514 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:49:08 compute-0 sudo[235392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieboorvnpanhcckgxnkadyboefeeuxhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396147.6782649-34-155536425051974/AnsiballZ_systemd.py'
Dec 10 19:49:08 compute-0 sudo[235392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:08 compute-0 python3.9[235394]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec 10 19:49:08 compute-0 sudo[235392]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:09 compute-0 sudo[235545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhltfjzsetgejtuaklrfaqxhgdofnrfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396148.9142098-42-189865543405710/AnsiballZ_setup.py'
Dec 10 19:49:09 compute-0 sudo[235545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:09 compute-0 python3.9[235547]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 10 19:49:09 compute-0 sudo[235545]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:10 compute-0 sudo[235629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dansukgxnurpuqcivkqgbrwrcxgtxjfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396148.9142098-42-189865543405710/AnsiballZ_dnf.py'
Dec 10 19:49:10 compute-0 sudo[235629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:10 compute-0 python3.9[235631]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 10 19:49:11 compute-0 podman[235633]: 2025-12-10 19:49:11.147358667 +0000 UTC m=+0.133932768 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:49:13 compute-0 sudo[235629]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:14 compute-0 sudo[235811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ergblwxhrfsdogdulohwxuszxlsgkjyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396153.659174-54-247404377283554/AnsiballZ_stat.py'
Dec 10 19:49:14 compute-0 sudo[235811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:14 compute-0 python3.9[235813]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:49:14 compute-0 sudo[235811]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:14 compute-0 sudo[235934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekiragzxfgvxsywpkwnibouguzfsahmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396153.659174-54-247404377283554/AnsiballZ_copy.py'
Dec 10 19:49:14 compute-0 sudo[235934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:15 compute-0 python3.9[235936]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765396153.659174-54-247404377283554/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:49:15 compute-0 sudo[235934]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:15 compute-0 sudo[236100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pezkfrheewkvammidftbwzfvcykpxogu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396155.317548-69-154392706522017/AnsiballZ_file.py'
Dec 10 19:49:15 compute-0 sudo[236100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:15 compute-0 podman[236060]: 2025-12-10 19:49:15.822021944 +0000 UTC m=+0.075197187 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2)
Dec 10 19:49:16 compute-0 python3.9[236105]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:49:16 compute-0 sudo[236100]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:16 compute-0 sudo[236256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmjzqiiffmytebejqlstvqmcdppmpan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396156.218726-77-66263969095422/AnsiballZ_stat.py'
Dec 10 19:49:16 compute-0 sudo[236256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:16 compute-0 python3.9[236258]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 10 19:49:16 compute-0 sudo[236256]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:17 compute-0 sudo[236379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmpgvtwydmnkbzultvekclosrtcpqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396156.218726-77-66263969095422/AnsiballZ_copy.py'
Dec 10 19:49:17 compute-0 sudo[236379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:17 compute-0 python3.9[236381]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765396156.218726-77-66263969095422/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 10 19:49:17 compute-0 sudo[236379]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:18 compute-0 sudo[236531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxboomparoblkzylofwttbscqthlxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765396157.6315496-92-155402383546761/AnsiballZ_systemd.py'
Dec 10 19:49:18 compute-0 sudo[236531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:18 compute-0 python3.9[236533]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 10 19:49:18 compute-0 systemd[1]: Stopping System Logging Service...
Dec 10 19:49:18 compute-0 rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec 10 19:49:18 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec 10 19:49:18 compute-0 systemd[1]: Stopped System Logging Service.
Dec 10 19:49:18 compute-0 systemd[1]: rsyslog.service: Consumed 4.590s CPU time, 10.2M memory peak, read 0B from disk, written 6.3M to disk.
Dec 10 19:49:18 compute-0 systemd[1]: Starting System Logging Service...
Dec 10 19:49:18 compute-0 rsyslogd[236537]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236537" x-info="https://www.rsyslog.com"] start
Dec 10 19:49:18 compute-0 systemd[1]: Started System Logging Service.
Dec 10 19:49:18 compute-0 rsyslogd[236537]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 19:49:18 compute-0 rsyslogd[236537]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec 10 19:49:18 compute-0 rsyslogd[236537]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec 10 19:49:18 compute-0 rsyslogd[236537]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec 10 19:49:18 compute-0 sudo[236531]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:18 compute-0 rsyslogd[236537]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec 10 19:49:19 compute-0 sshd-session[235044]: Connection closed by 192.168.122.30 port 38714
Dec 10 19:49:19 compute-0 sshd-session[235041]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:49:19 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 10 19:49:19 compute-0 systemd[1]: session-28.scope: Consumed 10.018s CPU time.
Dec 10 19:49:19 compute-0 systemd-logind[789]: Session 28 logged out. Waiting for processes to exit.
Dec 10 19:49:19 compute-0 systemd-logind[789]: Removed session 28.
Dec 10 19:49:23 compute-0 podman[236568]: 2025-12-10 19:49:23.120720798 +0000 UTC m=+0.094548784 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, version=9.6, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:49:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:49:23.359 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:49:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:49:23.359 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:49:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:49:23.359 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:49:25 compute-0 podman[236588]: 2025-12-10 19:49:25.085562684 +0000 UTC m=+0.060056705 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 19:49:29 compute-0 podman[203484]: time="2025-12-10T19:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:49:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:49:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4256 "" "Go-http-client/1.1"
Dec 10 19:49:31 compute-0 podman[236611]: 2025-12-10 19:49:31.116564858 +0000 UTC m=+0.091097299 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec 10 19:49:31 compute-0 openstack_network_exporter[205632]: ERROR   19:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:49:31 compute-0 openstack_network_exporter[205632]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:49:31 compute-0 openstack_network_exporter[205632]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:49:31 compute-0 openstack_network_exporter[205632]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:49:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:49:31 compute-0 openstack_network_exporter[205632]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:49:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:49:32 compute-0 podman[236632]: 2025-12-10 19:49:32.114738395 +0000 UTC m=+0.088499160 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 19:49:32 compute-0 podman[236631]: 2025-12-10 19:49:32.12376827 +0000 UTC m=+0.103989370 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 19:49:34 compute-0 nova_compute[189279]: 2025-12-10 19:49:34.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:34 compute-0 nova_compute[189279]: 2025-12-10 19:49:34.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 19:49:34 compute-0 nova_compute[189279]: 2025-12-10 19:49:34.714 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 19:49:34 compute-0 nova_compute[189279]: 2025-12-10 19:49:34.717 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:34 compute-0 nova_compute[189279]: 2025-12-10 19:49:34.717 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 19:49:34 compute-0 nova_compute[189279]: 2025-12-10 19:49:34.737 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:36 compute-0 nova_compute[189279]: 2025-12-10 19:49:36.881 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:37 compute-0 nova_compute[189279]: 2025-12-10 19:49:37.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:38 compute-0 nova_compute[189279]: 2025-12-10 19:49:38.491 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:38 compute-0 nova_compute[189279]: 2025-12-10 19:49:38.491 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:38 compute-0 nova_compute[189279]: 2025-12-10 19:49:38.491 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:49:38 compute-0 nova_compute[189279]: 2025-12-10 19:49:38.491 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:49:38 compute-0 nova_compute[189279]: 2025-12-10 19:49:38.544 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:49:38 compute-0 nova_compute[189279]: 2025-12-10 19:49:38.545 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:39 compute-0 podman[236668]: 2025-12-10 19:49:39.1198393 +0000 UTC m=+0.097140845 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:49:39 compute-0 podman[236667]: 2025-12-10 19:49:39.133510872 +0000 UTC m=+0.114871356 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 10 19:49:39 compute-0 nova_compute[189279]: 2025-12-10 19:49:39.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:39 compute-0 nova_compute[189279]: 2025-12-10 19:49:39.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:39 compute-0 nova_compute[189279]: 2025-12-10 19:49:39.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:39 compute-0 nova_compute[189279]: 2025-12-10 19:49:39.490 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:49:40 compute-0 nova_compute[189279]: 2025-12-10 19:49:40.493 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:49:40 compute-0 nova_compute[189279]: 2025-12-10 19:49:40.693 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:49:40 compute-0 nova_compute[189279]: 2025-12-10 19:49:40.694 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:49:40 compute-0 nova_compute[189279]: 2025-12-10 19:49:40.694 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:49:40 compute-0 nova_compute[189279]: 2025-12-10 19:49:40.694 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.035 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.036 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5727MB free_disk=72.42661666870117GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.037 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.037 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.166 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.166 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.268 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.342 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.343 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.365 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.393 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.421 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.437 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.439 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:49:41 compute-0 nova_compute[189279]: 2025-12-10 19:49:41.439 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:49:42 compute-0 podman[236709]: 2025-12-10 19:49:42.168889164 +0000 UTC m=+0.147603317 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.169 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.170 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.170 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.173 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.183 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.184 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.184 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.185 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.185 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.185 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.186 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.187 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.187 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.188 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.189 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.189 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.189 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.192 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.192 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.192 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.192 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.192 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.192 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.192 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:49:42.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:49:46 compute-0 podman[236736]: 2025-12-10 19:49:46.089283234 +0000 UTC m=+0.075116365 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 19:49:54 compute-0 podman[236756]: 2025-12-10 19:49:54.091975272 +0000 UTC m=+0.073929741 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:49:55 compute-0 sshd-session[236777]: Accepted publickey for zuul from 38.102.83.132 port 58712 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 19:49:55 compute-0 systemd-logind[789]: New session 29 of user zuul.
Dec 10 19:49:55 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec 10 19:49:55 compute-0 sshd-session[236777]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 19:49:56 compute-0 podman[236779]: 2025-12-10 19:49:56.012229507 +0000 UTC m=+0.077067389 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 19:49:57 compute-0 python3[236977]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:49:58 compute-0 sudo[237198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekwetsmmvglohwbqnjbmzkzyvqamjgc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396198.4666514-36700-201105112126111/AnsiballZ_command.py'
Dec 10 19:49:58 compute-0 sudo[237198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:49:59 compute-0 python3[237200]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:49:59 compute-0 sudo[237198]: pam_unix(sudo:session): session closed for user root
Dec 10 19:49:59 compute-0 podman[203484]: time="2025-12-10T19:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:49:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:49:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4263 "" "Go-http-client/1.1"
Dec 10 19:49:59 compute-0 sudo[237351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rirnjwnngnqjsplnhntkvtzauzenyegq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396199.6202722-36711-246234304544404/AnsiballZ_command.py'
Dec 10 19:49:59 compute-0 sudo[237351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:50:00 compute-0 python3[237353]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:50:01 compute-0 openstack_network_exporter[205632]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:50:01 compute-0 openstack_network_exporter[205632]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:50:01 compute-0 openstack_network_exporter[205632]: ERROR   19:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:50:01 compute-0 openstack_network_exporter[205632]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:50:01 compute-0 openstack_network_exporter[205632]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:50:01 compute-0 sudo[237351]: pam_unix(sudo:session): session closed for user root
Dec 10 19:50:02 compute-0 podman[237380]: 2025-12-10 19:50:02.130893426 +0000 UTC m=+0.101356450 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:50:02 compute-0 podman[237409]: 2025-12-10 19:50:02.244905287 +0000 UTC m=+0.077408277 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:50:02 compute-0 podman[237419]: 2025-12-10 19:50:02.249531743 +0000 UTC m=+0.079550386 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec 10 19:50:02 compute-0 python3[237562]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 10 19:50:03 compute-0 sudo[237713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfnnfsxjaalnzuzijrthknhwuputvzpr ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396203.2331102-36755-266801190960252/AnsiballZ_setup.py'
Dec 10 19:50:03 compute-0 sudo[237713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:50:03 compute-0 python3[237715]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 10 19:50:05 compute-0 sudo[237713]: pam_unix(sudo:session): session closed for user root
Dec 10 19:50:06 compute-0 sudo[237938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppyfhasgythxbpzdlyhldtwjikpspuzt ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396205.7439382-36784-53725639244171/AnsiballZ_command.py'
Dec 10 19:50:06 compute-0 sudo[237938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:50:06 compute-0 python3[237940]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:50:06 compute-0 sudo[237938]: pam_unix(sudo:session): session closed for user root
Dec 10 19:50:07 compute-0 sudo[238104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egoamgmgmsjskvwnvmnuzafztotkhwjm ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765396206.8533375-36801-244149271850088/AnsiballZ_command.py'
Dec 10 19:50:07 compute-0 sudo[238104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 19:50:07 compute-0 python3[238106]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 19:50:07 compute-0 sudo[238104]: pam_unix(sudo:session): session closed for user root
Dec 10 19:50:10 compute-0 podman[238146]: 2025-12-10 19:50:10.109000943 +0000 UTC m=+0.083936754 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec 10 19:50:10 compute-0 podman[238147]: 2025-12-10 19:50:10.121342638 +0000 UTC m=+0.095352675 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:50:13 compute-0 podman[238188]: 2025-12-10 19:50:13.125889262 +0000 UTC m=+0.109245443 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:50:17 compute-0 podman[238213]: 2025-12-10 19:50:17.101206959 +0000 UTC m=+0.083072952 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 19:50:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:50:23.361 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:50:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:50:23.362 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:50:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:50:23.363 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:50:25 compute-0 podman[238234]: 2025-12-10 19:50:25.105539199 +0000 UTC m=+0.088383140 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=)
Dec 10 19:50:27 compute-0 podman[238254]: 2025-12-10 19:50:27.068436201 +0000 UTC m=+0.053240304 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:50:29 compute-0 podman[203484]: time="2025-12-10T19:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:50:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:50:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4262 "" "Go-http-client/1.1"
Dec 10 19:50:31 compute-0 openstack_network_exporter[205632]: ERROR   19:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:50:31 compute-0 openstack_network_exporter[205632]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:50:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:50:31 compute-0 openstack_network_exporter[205632]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:50:31 compute-0 openstack_network_exporter[205632]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:50:31 compute-0 openstack_network_exporter[205632]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:50:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:50:33 compute-0 podman[238278]: 2025-12-10 19:50:33.089229729 +0000 UTC m=+0.067337819 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 10 19:50:33 compute-0 podman[238277]: 2025-12-10 19:50:33.08961319 +0000 UTC m=+0.069255828 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:50:33 compute-0 podman[238279]: 2025-12-10 19:50:33.09916207 +0000 UTC m=+0.069520094 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, version=9.4, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, architecture=x86_64, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 10 19:50:38 compute-0 nova_compute[189279]: 2025-12-10 19:50:38.434 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:38 compute-0 nova_compute[189279]: 2025-12-10 19:50:38.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:39 compute-0 nova_compute[189279]: 2025-12-10 19:50:39.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:39 compute-0 nova_compute[189279]: 2025-12-10 19:50:39.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.504 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.504 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.504 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.516 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.517 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.517 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.517 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.543 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.543 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.543 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.543 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.849 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.850 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5723MB free_disk=72.42660522460938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.851 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.851 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.915 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.916 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.940 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.956 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.958 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:50:40 compute-0 nova_compute[189279]: 2025-12-10 19:50:40.958 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:50:41 compute-0 podman[238335]: 2025-12-10 19:50:41.097742416 +0000 UTC m=+0.072041477 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:50:41 compute-0 podman[238334]: 2025-12-10 19:50:41.1347253 +0000 UTC m=+0.114932761 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 10 19:50:41 compute-0 nova_compute[189279]: 2025-12-10 19:50:41.929 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:50:41 compute-0 nova_compute[189279]: 2025-12-10 19:50:41.930 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:50:44 compute-0 podman[238378]: 2025-12-10 19:50:44.104382562 +0000 UTC m=+0.087760356 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 19:50:48 compute-0 podman[238405]: 2025-12-10 19:50:48.07990811 +0000 UTC m=+0.061640216 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true)
Dec 10 19:50:56 compute-0 podman[238426]: 2025-12-10 19:50:56.10277194 +0000 UTC m=+0.076434349 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Dec 10 19:50:58 compute-0 podman[238445]: 2025-12-10 19:50:58.079191344 +0000 UTC m=+0.061075832 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:50:59 compute-0 podman[203484]: time="2025-12-10T19:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:50:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:50:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4265 "" "Go-http-client/1.1"
Dec 10 19:51:01 compute-0 openstack_network_exporter[205632]: ERROR   19:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:51:01 compute-0 openstack_network_exporter[205632]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:51:01 compute-0 openstack_network_exporter[205632]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:51:01 compute-0 openstack_network_exporter[205632]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:51:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:51:01 compute-0 openstack_network_exporter[205632]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:51:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:51:04 compute-0 podman[238469]: 2025-12-10 19:51:04.091934399 +0000 UTC m=+0.072035519 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 10 19:51:04 compute-0 podman[238470]: 2025-12-10 19:51:04.107218884 +0000 UTC m=+0.080920532 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi)
Dec 10 19:51:04 compute-0 podman[238471]: 2025-12-10 19:51:04.144791372 +0000 UTC m=+0.112698854 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Dec 10 19:51:07 compute-0 sshd-session[236792]: Received disconnect from 38.102.83.132 port 58712:11: disconnected by user
Dec 10 19:51:07 compute-0 sshd-session[236792]: Disconnected from user zuul 38.102.83.132 port 58712
Dec 10 19:51:07 compute-0 sshd-session[236777]: pam_unix(sshd:session): session closed for user zuul
Dec 10 19:51:07 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 10 19:51:07 compute-0 systemd[1]: session-29.scope: Consumed 9.654s CPU time.
Dec 10 19:51:07 compute-0 systemd-logind[789]: Session 29 logged out. Waiting for processes to exit.
Dec 10 19:51:07 compute-0 systemd-logind[789]: Removed session 29.
Dec 10 19:51:12 compute-0 podman[238522]: 2025-12-10 19:51:12.100765584 +0000 UTC m=+0.077854964 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 19:51:12 compute-0 podman[238523]: 2025-12-10 19:51:12.121300573 +0000 UTC m=+0.093112470 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:51:14 compute-0 podman[238565]: 2025-12-10 19:51:14.779351826 +0000 UTC m=+0.111834341 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 19:51:19 compute-0 podman[238590]: 2025-12-10 19:51:19.119427698 +0000 UTC m=+0.106033715 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Dec 10 19:51:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:51:23.363 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:51:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:51:23.363 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:51:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:51:23.363 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:51:27 compute-0 podman[238610]: 2025-12-10 19:51:27.100003414 +0000 UTC m=+0.079874630 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec 10 19:51:29 compute-0 podman[238630]: 2025-12-10 19:51:29.080143879 +0000 UTC m=+0.061748312 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:51:29 compute-0 podman[203484]: time="2025-12-10T19:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:51:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:51:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4265 "" "Go-http-client/1.1"
Dec 10 19:51:31 compute-0 openstack_network_exporter[205632]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:51:31 compute-0 openstack_network_exporter[205632]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:51:31 compute-0 openstack_network_exporter[205632]: ERROR   19:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:51:31 compute-0 openstack_network_exporter[205632]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:51:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:51:31 compute-0 openstack_network_exporter[205632]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:51:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:51:35 compute-0 podman[238654]: 2025-12-10 19:51:35.092699657 +0000 UTC m=+0.069867131 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 10 19:51:35 compute-0 podman[238655]: 2025-12-10 19:51:35.098638727 +0000 UTC m=+0.072912773 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 19:51:35 compute-0 podman[238656]: 2025-12-10 19:51:35.100954979 +0000 UTC m=+0.075556013 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:51:37 compute-0 nova_compute[189279]: 2025-12-10 19:51:37.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:39 compute-0 nova_compute[189279]: 2025-12-10 19:51:39.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:39 compute-0 nova_compute[189279]: 2025-12-10 19:51:39.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:39 compute-0 nova_compute[189279]: 2025-12-10 19:51:39.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:40 compute-0 nova_compute[189279]: 2025-12-10 19:51:40.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:40 compute-0 nova_compute[189279]: 2025-12-10 19:51:40.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:51:40 compute-0 nova_compute[189279]: 2025-12-10 19:51:40.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:51:40 compute-0 nova_compute[189279]: 2025-12-10 19:51:40.511 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:51:41 compute-0 nova_compute[189279]: 2025-12-10 19:51:41.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:41 compute-0 nova_compute[189279]: 2025-12-10 19:51:41.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:41 compute-0 nova_compute[189279]: 2025-12-10 19:51:41.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:41 compute-0 nova_compute[189279]: 2025-12-10 19:51:41.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.170 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.171 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.171 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.174 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15db350>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.176 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.178 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.180 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.181 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:51:42.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.522 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.522 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.828 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.829 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5704MB free_disk=72.42660522460938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.830 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.830 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.897 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.898 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.923 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.940 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.942 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:51:42 compute-0 nova_compute[189279]: 2025-12-10 19:51:42.942 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:51:43 compute-0 podman[238709]: 2025-12-10 19:51:43.071768215 +0000 UTC m=+0.054798765 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:51:43 compute-0 podman[238708]: 2025-12-10 19:51:43.080407328 +0000 UTC m=+0.065913835 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 10 19:51:45 compute-0 podman[238751]: 2025-12-10 19:51:45.159070884 +0000 UTC m=+0.136375801 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 10 19:51:50 compute-0 podman[238777]: 2025-12-10 19:51:50.086276819 +0000 UTC m=+0.067218569 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 19:51:58 compute-0 podman[238797]: 2025-12-10 19:51:58.097782048 +0000 UTC m=+0.075736860 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, version=9.6, container_name=openstack_network_exporter)
Dec 10 19:51:59 compute-0 podman[203484]: time="2025-12-10T19:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:51:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:51:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4269 "" "Go-http-client/1.1"
Dec 10 19:52:00 compute-0 podman[238817]: 2025-12-10 19:52:00.090939493 +0000 UTC m=+0.070519098 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:52:01 compute-0 openstack_network_exporter[205632]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:52:01 compute-0 openstack_network_exporter[205632]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:52:01 compute-0 openstack_network_exporter[205632]: ERROR   19:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:52:01 compute-0 openstack_network_exporter[205632]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:52:01 compute-0 openstack_network_exporter[205632]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:52:06 compute-0 podman[238841]: 2025-12-10 19:52:06.100149012 +0000 UTC m=+0.078529894 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 10 19:52:06 compute-0 podman[238842]: 2025-12-10 19:52:06.127665012 +0000 UTC m=+0.085998165 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Dec 10 19:52:06 compute-0 podman[238848]: 2025-12-10 19:52:06.138290528 +0000 UTC m=+0.093235859 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container)
Dec 10 19:52:14 compute-0 podman[238896]: 2025-12-10 19:52:14.097415529 +0000 UTC m=+0.078983957 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Dec 10 19:52:14 compute-0 podman[238897]: 2025-12-10 19:52:14.120171871 +0000 UTC m=+0.093671821 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:52:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:14.241 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:52:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:14.242 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 19:52:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:14.243 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:52:16 compute-0 podman[238939]: 2025-12-10 19:52:16.110462669 +0000 UTC m=+0.093289180 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:52:21 compute-0 podman[238965]: 2025-12-10 19:52:21.094169657 +0000 UTC m=+0.079143531 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 10 19:52:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:23.364 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:52:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:23.365 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:52:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:23.365 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:52:29 compute-0 podman[238984]: 2025-12-10 19:52:29.105340084 +0000 UTC m=+0.087226245 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec 10 19:52:29 compute-0 podman[203484]: time="2025-12-10T19:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:52:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:52:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4260 "" "Go-http-client/1.1"
Dec 10 19:52:31 compute-0 podman[239005]: 2025-12-10 19:52:31.1093249 +0000 UTC m=+0.093057646 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 19:52:31 compute-0 openstack_network_exporter[205632]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:52:31 compute-0 openstack_network_exporter[205632]: ERROR   19:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:52:31 compute-0 openstack_network_exporter[205632]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:52:31 compute-0 openstack_network_exporter[205632]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:52:31 compute-0 openstack_network_exporter[205632]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:52:37 compute-0 podman[239030]: 2025-12-10 19:52:37.097515865 +0000 UTC m=+0.073439962 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Dec 10 19:52:37 compute-0 podman[239031]: 2025-12-10 19:52:37.103272894 +0000 UTC m=+0.076817765 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4)
Dec 10 19:52:37 compute-0 podman[239029]: 2025-12-10 19:52:37.118133227 +0000 UTC m=+0.100234766 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 10 19:52:37 compute-0 nova_compute[189279]: 2025-12-10 19:52:37.943 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:39 compute-0 nova_compute[189279]: 2025-12-10 19:52:39.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:40 compute-0 nova_compute[189279]: 2025-12-10 19:52:40.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:40 compute-0 nova_compute[189279]: 2025-12-10 19:52:40.503 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:40 compute-0 nova_compute[189279]: 2025-12-10 19:52:40.503 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:52:40 compute-0 nova_compute[189279]: 2025-12-10 19:52:40.503 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:52:40 compute-0 nova_compute[189279]: 2025-12-10 19:52:40.513 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 19:52:41 compute-0 nova_compute[189279]: 2025-12-10 19:52:41.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:41 compute-0 nova_compute[189279]: 2025-12-10 19:52:41.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:41 compute-0 nova_compute[189279]: 2025-12-10 19:52:41.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:41 compute-0 nova_compute[189279]: 2025-12-10 19:52:41.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:41 compute-0 nova_compute[189279]: 2025-12-10 19:52:41.491 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:52:42 compute-0 nova_compute[189279]: 2025-12-10 19:52:42.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.520 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.521 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:52:44 compute-0 podman[239086]: 2025-12-10 19:52:44.742688744 +0000 UTC m=+0.063412462 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:52:44 compute-0 podman[239085]: 2025-12-10 19:52:44.771621179 +0000 UTC m=+0.098380355 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.846 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.848 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.42709732055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.848 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.848 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.904 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.904 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.926 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.945 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.947 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:52:44 compute-0 nova_compute[189279]: 2025-12-10 19:52:44.947 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:52:47 compute-0 podman[239128]: 2025-12-10 19:52:47.137490657 +0000 UTC m=+0.123134361 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Dec 10 19:52:52 compute-0 podman[239153]: 2025-12-10 19:52:52.092144569 +0000 UTC m=+0.074145361 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 10 19:52:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:57.004 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:52:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:57.004 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 19:52:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:52:59.007 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:52:59 compute-0 podman[203484]: time="2025-12-10T19:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:52:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 19:52:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4259 "" "Go-http-client/1.1"
Dec 10 19:53:00 compute-0 podman[239173]: 2025-12-10 19:53:00.08116409 +0000 UTC m=+0.062750655 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec 10 19:53:01 compute-0 openstack_network_exporter[205632]: ERROR   19:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:53:01 compute-0 openstack_network_exporter[205632]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:53:01 compute-0 openstack_network_exporter[205632]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:53:01 compute-0 openstack_network_exporter[205632]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:53:01 compute-0 openstack_network_exporter[205632]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:53:02 compute-0 podman[239193]: 2025-12-10 19:53:02.083035377 +0000 UTC m=+0.065264024 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 19:53:06 compute-0 nova_compute[189279]: 2025-12-10 19:53:06.997 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:06 compute-0 nova_compute[189279]: 2025-12-10 19:53:06.998 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.013 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.127 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.127 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.136 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.136 189283 INFO nova.compute.claims [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Claim successful on node compute-0.ctlplane.example.com
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.240 189283 DEBUG nova.compute.provider_tree [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.254 189283 DEBUG nova.scheduler.client.report [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.271 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.272 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.305 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.306 189283 DEBUG nova.network.neutron [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.324 189283 INFO nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.351 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.433 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.435 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.436 189283 INFO nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Creating image(s)
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.437 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.437 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.438 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.439 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "193edf3941027c090c206b4992bbea3ae5563eb9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:07 compute-0 nova_compute[189279]: 2025-12-10 19:53:07.439 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:08 compute-0 podman[239218]: 2025-12-10 19:53:08.090467365 +0000 UTC m=+0.073102341 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 19:53:08 compute-0 podman[239219]: 2025-12-10 19:53:08.093564941 +0000 UTC m=+0.071956619 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec 10 19:53:08 compute-0 nova_compute[189279]: 2025-12-10 19:53:08.094 189283 WARNING oslo_policy.policy [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 10 19:53:08 compute-0 nova_compute[189279]: 2025-12-10 19:53:08.094 189283 WARNING oslo_policy.policy [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 10 19:53:08 compute-0 podman[239217]: 2025-12-10 19:53:08.099708452 +0000 UTC m=+0.085760663 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 19:53:08 compute-0 nova_compute[189279]: 2025-12-10 19:53:08.759 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:08 compute-0 nova_compute[189279]: 2025-12-10 19:53:08.815 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.part --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:08 compute-0 nova_compute[189279]: 2025-12-10 19:53:08.817 189283 DEBUG nova.virt.images [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] 06e6231d-0a77-4b09-acb3-e7faf5a777be was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 10 19:53:08 compute-0 nova_compute[189279]: 2025-12-10 19:53:08.818 189283 DEBUG nova.privsep.utils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 10 19:53:08 compute-0 nova_compute[189279]: 2025-12-10 19:53:08.819 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.part /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.005 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.part /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.converted" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.014 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.110 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9.converted --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.111 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.128 189283 INFO oslo.privsep.daemon [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpvbendwlw/privsep.sock']
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.143 189283 DEBUG nova.network.neutron [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Successfully created port: 20b76af1-42c6-4b7d-a834-c20e017b3e8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.791 189283 INFO oslo.privsep.daemon [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Spawned new privsep daemon via rootwrap
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.668 239292 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.672 239292 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.674 239292 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.674 239292 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239292
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.864 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.923 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.925 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "193edf3941027c090c206b4992bbea3ae5563eb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.927 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:09 compute-0 nova_compute[189279]: 2025-12-10 19:53:09.953 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.012 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.013 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.052 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.054 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.054 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.111 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.112 189283 DEBUG nova.virt.disk.api [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Checking if we can resize image /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.113 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.170 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.171 189283 DEBUG nova.virt.disk.api [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Cannot resize image /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
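The "Cannot resize image ... to a smaller size." message reflects the check logged just above: the flavor asks for a 1 GiB root disk (size=1073741824) and the overlay's virtual size is already at least that, so no resize is attempted. A hedged illustration of that check (not nova's exact code), reading the virtual size from the qemu-img info JSON:

    import json
    from oslo_concurrency import processutils

    def can_resize_image(path, requested_size):
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json')
        virtual_size = json.loads(out)['virtual-size']
        # Only growing the image is allowed; shrinking is refused.
        return requested_size > virtual_size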
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.172 189283 DEBUG nova.objects.instance [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'migration_context' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.192 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.193 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.194 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.194 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.195 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.195 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.218 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.219 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.257 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.258 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
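Here the ephemeral base image is built in the _base cache: a 1G raw file created with qemu-img and formatted as VFAT with the label ephemeral0, all while holding the "ephemeral_1_0706d66" lock. A short sketch of the same two commands via oslo_concurrency.processutils:

    from oslo_concurrency import processutils

    EPH_BASE = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'

    # 1G raw backing file for the ephemeral disk.
    processutils.execute('env', 'LC_ALL=C', 'LANG=C',
                         'qemu-img', 'create', '-f', 'raw', EPH_BASE, '1G')
    # VFAT filesystem labelled "ephemeral0", as in the mkfs call above.
    processutils.execute('mkfs', '-t', 'vfat', '-n', 'ephemeral0', EPH_BASE)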
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.270 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.289 189283 DEBUG nova.network.neutron [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Successfully updated port: 20b76af1-42c6-4b7d-a834-c20e017b3e8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.307 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.308 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.309 189283 DEBUG nova.network.neutron [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.326 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.327 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.328 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.339 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.391 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.393 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.428 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.429 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.430 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.444 189283 DEBUG nova.network.neutron [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.485 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.486 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.486 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Ensure instance console log exists: /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.486 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.487 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.487 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
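The Acquiring/acquired/released triples that run through this section (disk.info, ephemeral_1_0706d66, vgpu_resources, refresh_cache-...) are emitted by oslo_concurrency.lockutils. A minimal sketch of the two usual forms, with names borrowed from the log; whether a given nova lock is in-process or file-backed (external=True), and the lock_path shown, are assumptions here:

    from oslo_concurrency import lockutils

    # Context-manager form, as used for the "vgpu_resources" lock above.
    with lockutils.lock('vgpu_resources'):
        pass  # critical section

    # Decorator form; external=True adds an on-disk lock file under lock_path.
    @lockutils.synchronized('ephemeral_1_0706d66', external=True,
                            lock_path='/var/lib/nova/instances/locks')
    def fetch_func_sync():
        pass  # fetch/format the base image exactly once per host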
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.755 189283 DEBUG nova.compute.manager [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-changed-20b76af1-42c6-4b7d-a834-c20e017b3e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.756 189283 DEBUG nova.compute.manager [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Refreshing instance network info cache due to event network-changed-20b76af1-42c6-4b7d-a834-c20e017b3e8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 19:53:10 compute-0 nova_compute[189279]: 2025-12-10 19:53:10.756 189283 DEBUG oslo_concurrency.lockutils [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.188 189283 DEBUG nova.network.neutron [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.210 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.211 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Instance network_info: |[{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.211 189283 DEBUG oslo_concurrency.lockutils [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.212 189283 DEBUG nova.network.neutron [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Refreshing network info cache for port 20b76af1-42c6-4b7d-a834-c20e017b3e8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.215 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Start _get_guest_xml network_info=[{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_options': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.224 189283 WARNING nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.233 189283 DEBUG nova.virt.libvirt.host [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.234 189283 DEBUG nova.virt.libvirt.host [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.239 189283 DEBUG nova.virt.libvirt.host [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.239 189283 DEBUG nova.virt.libvirt.host [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.240 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.241 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T19:52:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0fc2e5b1-b522-4c52-bdef-97db09e458e4',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.241 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.241 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.242 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.242 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.242 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.243 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.243 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.243 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.244 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.244 189283 DEBUG nova.virt.hardware [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.247 189283 DEBUG nova.privsep.utils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.248 189283 DEBUG nova.virt.libvirt.vif [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-0icu4z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:53:07Z,user_data=None,user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=12986b74-7b15-4ff4-9019-081950660d4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.249 189283 DEBUG nova.network.os_vif_util [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.250 189283 DEBUG nova.network.os_vif_util [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:2e:35,bridge_name='br-int',has_traffic_filtering=True,id=20b76af1-42c6-4b7d-a834-c20e017b3e8d,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b76af1-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.251 189283 DEBUG nova.objects.instance [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'pci_devices' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.267 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] End _get_guest_xml xml=<domain type="kvm">
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <uuid>12986b74-7b15-4ff4-9019-081950660d4b</uuid>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <name>instance-00000001</name>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <memory>524288</memory>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <metadata>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <nova:name>test_0</nova:name>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 19:53:11</nova:creationTime>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <nova:flavor name="m1.small">
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:memory>512</nova:memory>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:ephemeral>1</nova:ephemeral>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:user uuid="2143e69e49fd49db99c8737c973c1ea5">admin</nova:user>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:project uuid="fe518ea62a94467e823b2b1046c57a2e">admin</nova:project>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="06e6231d-0a77-4b09-acb3-e7faf5a777be"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         <nova:port uuid="20b76af1-42c6-4b7d-a834-c20e017b3e8d">
Dec 10 19:53:11 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="192.168.0.139" ipVersion="4"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   </metadata>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <system>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <entry name="serial">12986b74-7b15-4ff4-9019-081950660d4b</entry>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <entry name="uuid">12986b74-7b15-4ff4-9019-081950660d4b</entry>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </system>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <os>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   </os>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <features>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <apic/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   </features>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   </clock>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <target dev="vdb" bus="virtio"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.config"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:96:2e:35"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <target dev="tap20b76af1-42"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/console.log" append="off"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </serial>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <video>
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </video>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 19:53:11 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 19:53:11 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 19:53:11 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:53:11 compute-0 nova_compute[189279]: </domain>
Dec 10 19:53:11 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.269 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Preparing to wait for external event network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.269 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.270 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.270 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.271 189283 DEBUG nova.virt.libvirt.vif [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-0icu4z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:53:07Z,user_data=None,user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=12986b74-7b15-4ff4-9019-081950660d4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.271 189283 DEBUG nova.network.os_vif_util [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.271 189283 DEBUG nova.network.os_vif_util [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:2e:35,bridge_name='br-int',has_traffic_filtering=True,id=20b76af1-42c6-4b7d-a834-c20e017b3e8d,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b76af1-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.272 189283 DEBUG os_vif [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:2e:35,bridge_name='br-int',has_traffic_filtering=True,id=20b76af1-42c6-4b7d-a834-c20e017b3e8d,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b76af1-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
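At this point the converted VIFOpenVSwitch object is handed to os-vif, whose ovs plugin wires the tap device into br-int. A hedged sketch of the public os-vif entry points involved; the field values are taken from the log, but building the objects by hand like this is purely illustrative:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore

    net = network.Network(id='e55a1ff5-f742-4bad-ae9c-2f6d4795fa29',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(
        id='20b76af1-42c6-4b7d-a834-c20e017b3e8d',
        address='fa:16:3e:96:2e:35',
        vif_name='tap20b76af1-42',
        bridge_name='br-int',
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='20b76af1-42c6-4b7d-a834-c20e017b3e8d'))
    inst = instance_info.InstanceInfo(
        uuid='12986b74-7b15-4ff4-9019-081950660d4b',
        name='instance-00000001')

    os_vif.plug(port, inst)  # ask the ovs plugin to create/wire the port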
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.304 189283 DEBUG ovsdbapp.backend.ovs_idl [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.304 189283 DEBUG ovsdbapp.backend.ovs_idl [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.304 189283 DEBUG ovsdbapp.backend.ovs_idl [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.305 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.305 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.305 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.306 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.308 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.311 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.321 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.321 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.321 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:53:11 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.322 189283 INFO oslo.privsep.daemon [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp1gtj3j4w/privsep.sock']
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.041 189283 INFO oslo.privsep.daemon [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Spawned new privsep daemon via rootwrap
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.864 239329 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.868 239329 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.870 239329 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:11.870 239329 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239329
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.340 189283 DEBUG nova.network.neutron [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated VIF entry in instance network info cache for port 20b76af1-42c6-4b7d-a834-c20e017b3e8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.341 189283 DEBUG nova.network.neutron [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.358 189283 DEBUG oslo_concurrency.lockutils [req-2dc29e93-0019-40a6-812b-3a2bd5dc3d21 req-f4660f9a-18e5-431c-8dbc-bd08d02a2a1b 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.399 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.400 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20b76af1-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.401 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20b76af1-42, col_values=(('external_ids', {'iface-id': '20b76af1-42c6-4b7d-a834-c20e017b3e8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:2e:35', 'vm-uuid': '12986b74-7b15-4ff4-9019-081950660d4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.403 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:12 compute-0 NetworkManager[56238]: <info>  [1765396392.4042] manager: (tap20b76af1-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.409 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.413 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.414 189283 INFO os_vif [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:2e:35,bridge_name='br-int',has_traffic_filtering=True,id=20b76af1-42c6-4b7d-a834-c20e017b3e8d,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b76af1-42')
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.695 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.696 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.696 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.696 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No VIF found with MAC fa:16:3e:96:2e:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 19:53:12 compute-0 nova_compute[189279]: 2025-12-10 19:53:12.697 189283 INFO nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Using config drive
Dec 10 19:53:13 compute-0 nova_compute[189279]: 2025-12-10 19:53:13.231 189283 INFO nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Creating config drive at /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.config
Dec 10 19:53:13 compute-0 nova_compute[189279]: 2025-12-10 19:53:13.235 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfe2qcuu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:13 compute-0 nova_compute[189279]: 2025-12-10 19:53:13.407 189283 DEBUG oslo_concurrency.processutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfe2qcuu" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:13 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 10 19:53:13 compute-0 NetworkManager[56238]: <info>  [1765396393.5169] manager: (tap20b76af1-42): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Dec 10 19:53:13 compute-0 kernel: tap20b76af1-42: entered promiscuous mode
Dec 10 19:53:13 compute-0 nova_compute[189279]: 2025-12-10 19:53:13.520 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:13 compute-0 ovn_controller[97701]: 2025-12-10T19:53:13Z|00027|binding|INFO|Claiming lport 20b76af1-42c6-4b7d-a834-c20e017b3e8d for this chassis.
Dec 10 19:53:13 compute-0 ovn_controller[97701]: 2025-12-10T19:53:13Z|00028|binding|INFO|20b76af1-42c6-4b7d-a834-c20e017b3e8d: Claiming fa:16:3e:96:2e:35 192.168.0.139
Dec 10 19:53:13 compute-0 nova_compute[189279]: 2025-12-10 19:53:13.524 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:13 compute-0 systemd-udevd[239357]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:53:13 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:13.552 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:2e:35 192.168.0.139'], port_security=['fa:16:3e:96:2e:35 192.168.0.139'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.139/24', 'neutron:device_id': '12986b74-7b15-4ff4-9019-081950660d4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=20b76af1-42c6-4b7d-a834-c20e017b3e8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:53:13 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:13.553 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 20b76af1-42c6-4b7d-a834-c20e017b3e8d in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 bound to our chassis
Dec 10 19:53:13 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:13.555 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 19:53:13 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:13.557 106564 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmptlhk915t/privsep.sock']
Dec 10 19:53:13 compute-0 NetworkManager[56238]: <info>  [1765396393.5735] device (tap20b76af1-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 19:53:13 compute-0 NetworkManager[56238]: <info>  [1765396393.5749] device (tap20b76af1-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 19:53:13 compute-0 nova_compute[189279]: 2025-12-10 19:53:13.611 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:13 compute-0 systemd-machined[155642]: New machine qemu-1-instance-00000001.
Dec 10 19:53:13 compute-0 ovn_controller[97701]: 2025-12-10T19:53:13Z|00029|binding|INFO|Setting lport 20b76af1-42c6-4b7d-a834-c20e017b3e8d ovn-installed in OVS
Dec 10 19:53:13 compute-0 ovn_controller[97701]: 2025-12-10T19:53:13Z|00030|binding|INFO|Setting lport 20b76af1-42c6-4b7d-a834-c20e017b3e8d up in Southbound
Dec 10 19:53:13 compute-0 nova_compute[189279]: 2025-12-10 19:53:13.623 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:13 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.023 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396394.0225077, 12986b74-7b15-4ff4-9019-081950660d4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.024 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] VM Started (Lifecycle Event)
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.068 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.074 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396394.0227602, 12986b74-7b15-4ff4-9019-081950660d4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.075 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] VM Paused (Lifecycle Event)
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.095 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.100 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.117 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.153 189283 DEBUG nova.compute.manager [req-a32ecd0c-4273-4912-b2d4-32dccc641fc0 req-6ff31d2b-0095-4f8b-a1e5-055cb817cbf5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.154 189283 DEBUG oslo_concurrency.lockutils [req-a32ecd0c-4273-4912-b2d4-32dccc641fc0 req-6ff31d2b-0095-4f8b-a1e5-055cb817cbf5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.154 189283 DEBUG oslo_concurrency.lockutils [req-a32ecd0c-4273-4912-b2d4-32dccc641fc0 req-6ff31d2b-0095-4f8b-a1e5-055cb817cbf5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.154 189283 DEBUG oslo_concurrency.lockutils [req-a32ecd0c-4273-4912-b2d4-32dccc641fc0 req-6ff31d2b-0095-4f8b-a1e5-055cb817cbf5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.155 189283 DEBUG nova.compute.manager [req-a32ecd0c-4273-4912-b2d4-32dccc641fc0 req-6ff31d2b-0095-4f8b-a1e5-055cb817cbf5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Processing event network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.155 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.171 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396394.170827, 12986b74-7b15-4ff4-9019-081950660d4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.172 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] VM Resumed (Lifecycle Event)
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.185 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.191 189283 INFO nova.virt.libvirt.driver [-] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Instance spawned successfully.
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.191 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.213 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.219 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.244 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.245 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.245 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.246 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.246 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.247 189283 DEBUG nova.virt.libvirt.driver [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.282 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.296 106564 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.297 106564 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptlhk915t/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.145 239384 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.149 239384 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.151 239384 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.151 239384 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239384
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.301 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e98fc78d-9c72-4ca3-bd20-6c682ed56815]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.325 189283 INFO nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Took 6.89 seconds to spawn the instance on the hypervisor.
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.326 189283 DEBUG nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.397 189283 INFO nova.compute.manager [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Took 7.30 seconds to build instance.
Dec 10 19:53:14 compute-0 nova_compute[189279]: 2025-12-10 19:53:14.421 189283 DEBUG oslo_concurrency.lockutils [None req-feee2853-d412-4fe2-85f6-a72f36450425 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.873 239384 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.874 239384 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:14 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:14.874 239384 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:15 compute-0 podman[239389]: 2025-12-10 19:53:15.098155575 +0000 UTC m=+0.072917657 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:53:15 compute-0 podman[239390]: 2025-12-10 19:53:15.098371021 +0000 UTC m=+0.072803964 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.479 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[7686bbe5-7f63-4a33-8381-8db78b08b013]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.481 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape55a1ff5-f1 in ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.484 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape55a1ff5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.484 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[138d0f43-f0fb-4ddb-9a07-c29a0670e982]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.487 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6a0b04-2654-40f5-b1a7-86e4e422e5a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.511 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[099dea3a-2791-466a-b4f4-500aff9564e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.539 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[cb393754-c013-487f-b22f-e95879ed978e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:15.543 106564 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpfqe26oy7/privsep.sock']
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.268 106564 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.269 106564 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpfqe26oy7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.116 239437 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.122 239437 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.125 239437 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.126 239437 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239437
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.273 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[ff214220-57d0-4b55-8564-7c52f2c8d630]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:16 compute-0 nova_compute[189279]: 2025-12-10 19:53:16.294 189283 DEBUG nova.compute.manager [req-5bc0643e-cd74-4a30-863e-9377f8ca7270 req-11c5858e-1de8-4020-88a4-7a83956dfc11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:53:16 compute-0 nova_compute[189279]: 2025-12-10 19:53:16.295 189283 DEBUG oslo_concurrency.lockutils [req-5bc0643e-cd74-4a30-863e-9377f8ca7270 req-11c5858e-1de8-4020-88a4-7a83956dfc11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:16 compute-0 nova_compute[189279]: 2025-12-10 19:53:16.296 189283 DEBUG oslo_concurrency.lockutils [req-5bc0643e-cd74-4a30-863e-9377f8ca7270 req-11c5858e-1de8-4020-88a4-7a83956dfc11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:16 compute-0 nova_compute[189279]: 2025-12-10 19:53:16.297 189283 DEBUG oslo_concurrency.lockutils [req-5bc0643e-cd74-4a30-863e-9377f8ca7270 req-11c5858e-1de8-4020-88a4-7a83956dfc11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:16 compute-0 nova_compute[189279]: 2025-12-10 19:53:16.297 189283 DEBUG nova.compute.manager [req-5bc0643e-cd74-4a30-863e-9377f8ca7270 req-11c5858e-1de8-4020-88a4-7a83956dfc11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] No waiting events found dispatching network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 19:53:16 compute-0 nova_compute[189279]: 2025-12-10 19:53:16.298 189283 WARNING nova.compute.manager [req-5bc0643e-cd74-4a30-863e-9377f8ca7270 req-11c5858e-1de8-4020-88a4-7a83956dfc11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received unexpected event network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d for instance with vm_state active and task_state None.
Dec 10 19:53:16 compute-0 nova_compute[189279]: 2025-12-10 19:53:16.299 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:16 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 10 19:53:16 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.836 239437 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.836 239437 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:16.836 239437 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:17 compute-0 nova_compute[189279]: 2025-12-10 19:53:17.405 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.532 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[ab349803-4f64-41ea-87ea-463a1676c4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 NetworkManager[56238]: <info>  [1765396397.5707] manager: (tape55a1ff5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.568 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[64aa9474-f27a-4e20-80b7-ea9f9f4a48b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.604 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ee099d-5698-4981-b74d-1f071468ccda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.608 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[adb28ef1-36aa-4a41-9c3e-d36dd6afdb35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 systemd-udevd[239476]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:53:17 compute-0 NetworkManager[56238]: <info>  [1765396397.6411] device (tape55a1ff5-f0): carrier: link connected
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.646 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[1115eab2-7776-4a23-966f-19be01f42979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.670 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[19fcd1d4-3c59-48f2-81c2-db85284cd698]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 30346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239500, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.693 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[cbcb462b-7fe7-4837-adc0-c27fbbf14ad8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:f6e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372629, 'tstamp': 372629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239504, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.712 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e04a8fbe-948c-4b1b-8782-17600b2dbd56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 30346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239506, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 podman[239465]: 2025-12-10 19:53:17.73667827 +0000 UTC m=+0.143667473 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.763 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[086ea032-c8a4-4c1e-88d0-02adc7884a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.841 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[2e462da2-886d-491e-b255-b0cba8dc4d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.844 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.845 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.845 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:53:17 compute-0 kernel: tape55a1ff5-f0: entered promiscuous mode
Dec 10 19:53:17 compute-0 nova_compute[189279]: 2025-12-10 19:53:17.848 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:17 compute-0 NetworkManager[56238]: <info>  [1765396397.8494] manager: (tape55a1ff5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec 10 19:53:17 compute-0 nova_compute[189279]: 2025-12-10 19:53:17.853 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.855 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:53:17 compute-0 nova_compute[189279]: 2025-12-10 19:53:17.857 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:17 compute-0 ovn_controller[97701]: 2025-12-10T19:53:17Z|00031|binding|INFO|Releasing lport f70c9140-d0bb-473b-94ef-0336fe52cbb0 from this chassis (sb_readonly=0)
Dec 10 19:53:17 compute-0 nova_compute[189279]: 2025-12-10 19:53:17.899 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:17 compute-0 nova_compute[189279]: 2025-12-10 19:53:17.902 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.901 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e55a1ff5-f742-4bad-ae9c-2f6d4795fa29.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e55a1ff5-f742-4bad-ae9c-2f6d4795fa29.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.903 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[71f8f209-d8fd-4066-bc00-7d2c2668c067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.905 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: global
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/e55a1ff5-f742-4bad-ae9c-2f6d4795fa29.pid.haproxy
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 19:53:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:17.906 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'env', 'PROCESS_TAG=haproxy-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e55a1ff5-f742-4bad-ae9c-2f6d4795fa29.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 19:53:18 compute-0 podman[239542]: 2025-12-10 19:53:18.299822815 +0000 UTC m=+0.058794435 container create 429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:53:18 compute-0 systemd[1]: Started libpod-conmon-429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102.scope.
Dec 10 19:53:18 compute-0 podman[239542]: 2025-12-10 19:53:18.268300429 +0000 UTC m=+0.027272049 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 19:53:18 compute-0 systemd[1]: Started libcrun container.
Dec 10 19:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc3fc5e82d551ae3a43d490a5d3025ef36a223c071f0909dfb60c0b008a606f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 19:53:18 compute-0 podman[239542]: 2025-12-10 19:53:18.402879817 +0000 UTC m=+0.161851457 container init 429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 19:53:18 compute-0 podman[239542]: 2025-12-10 19:53:18.410348486 +0000 UTC m=+0.169320106 container start 429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 10 19:53:18 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [NOTICE]   (239560) : New worker (239562) forked
Dec 10 19:53:18 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [NOTICE]   (239560) : Loading success.
Dec 10 19:53:21 compute-0 nova_compute[189279]: 2025-12-10 19:53:21.304 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:22 compute-0 nova_compute[189279]: 2025-12-10 19:53:22.411 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:23 compute-0 podman[239572]: 2025-12-10 19:53:23.084064552 +0000 UTC m=+0.069067201 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Dec 10 19:53:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:23.366 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:23.367 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:53:23.372 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6588] manager: (patch-br-int-to-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6594] device (patch-br-int-to-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <warn>  [1765396404.6598] device (patch-br-int-to-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6603] manager: (patch-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6605] device (patch-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <warn>  [1765396404.6606] device (patch-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6612] manager: (patch-br-int-to-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec 10 19:53:24 compute-0 nova_compute[189279]: 2025-12-10 19:53:24.660 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6617] manager: (patch-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6621] device (patch-br-int-to-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 10 19:53:24 compute-0 NetworkManager[56238]: <info>  [1765396404.6623] device (patch-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 10 19:53:24 compute-0 ovn_controller[97701]: 2025-12-10T19:53:24Z|00032|binding|INFO|Releasing lport f70c9140-d0bb-473b-94ef-0336fe52cbb0 from this chassis (sb_readonly=0)
Dec 10 19:53:24 compute-0 ovn_controller[97701]: 2025-12-10T19:53:24Z|00033|binding|INFO|Releasing lport f70c9140-d0bb-473b-94ef-0336fe52cbb0 from this chassis (sb_readonly=0)
Dec 10 19:53:24 compute-0 nova_compute[189279]: 2025-12-10 19:53:24.700 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:24 compute-0 nova_compute[189279]: 2025-12-10 19:53:24.709 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:25 compute-0 nova_compute[189279]: 2025-12-10 19:53:25.349 189283 DEBUG nova.compute.manager [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-changed-20b76af1-42c6-4b7d-a834-c20e017b3e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:53:25 compute-0 nova_compute[189279]: 2025-12-10 19:53:25.350 189283 DEBUG nova.compute.manager [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Refreshing instance network info cache due to event network-changed-20b76af1-42c6-4b7d-a834-c20e017b3e8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 19:53:25 compute-0 nova_compute[189279]: 2025-12-10 19:53:25.350 189283 DEBUG oslo_concurrency.lockutils [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:53:25 compute-0 nova_compute[189279]: 2025-12-10 19:53:25.350 189283 DEBUG oslo_concurrency.lockutils [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:53:25 compute-0 nova_compute[189279]: 2025-12-10 19:53:25.351 189283 DEBUG nova.network.neutron [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Refreshing network info cache for port 20b76af1-42c6-4b7d-a834-c20e017b3e8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 19:53:26 compute-0 nova_compute[189279]: 2025-12-10 19:53:26.307 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:26 compute-0 nova_compute[189279]: 2025-12-10 19:53:26.523 189283 DEBUG nova.network.neutron [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated VIF entry in instance network info cache for port 20b76af1-42c6-4b7d-a834-c20e017b3e8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 19:53:26 compute-0 nova_compute[189279]: 2025-12-10 19:53:26.523 189283 DEBUG nova.network.neutron [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:53:26 compute-0 nova_compute[189279]: 2025-12-10 19:53:26.545 189283 DEBUG oslo_concurrency.lockutils [req-553f55fb-79e4-470b-868d-5d41da00ef39 req-a08d1c54-6e4a-4371-991c-7bd911de079f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:53:27 compute-0 nova_compute[189279]: 2025-12-10 19:53:27.414 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:29 compute-0 podman[203484]: time="2025-12-10T19:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:53:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:53:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4763 "" "Go-http-client/1.1"
Dec 10 19:53:31 compute-0 podman[239592]: 2025-12-10 19:53:31.099162609 +0000 UTC m=+0.078531173 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9)
Dec 10 19:53:31 compute-0 nova_compute[189279]: 2025-12-10 19:53:31.309 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:31 compute-0 openstack_network_exporter[205632]: ERROR   19:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:53:31 compute-0 openstack_network_exporter[205632]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:53:31 compute-0 openstack_network_exporter[205632]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:53:31 compute-0 openstack_network_exporter[205632]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:53:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:53:31 compute-0 openstack_network_exporter[205632]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:53:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:53:32 compute-0 nova_compute[189279]: 2025-12-10 19:53:32.418 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:33 compute-0 podman[239612]: 2025-12-10 19:53:33.093730213 +0000 UTC m=+0.076489967 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:53:36 compute-0 nova_compute[189279]: 2025-12-10 19:53:36.312 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:37 compute-0 nova_compute[189279]: 2025-12-10 19:53:37.421 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:38 compute-0 nova_compute[189279]: 2025-12-10 19:53:38.948 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:39 compute-0 podman[239636]: 2025-12-10 19:53:39.095274139 +0000 UTC m=+0.079254562 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 10 19:53:39 compute-0 podman[239638]: 2025-12-10 19:53:39.105988327 +0000 UTC m=+0.079677214 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=)
Dec 10 19:53:39 compute-0 podman[239637]: 2025-12-10 19:53:39.130152078 +0000 UTC m=+0.109501313 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:53:40 compute-0 nova_compute[189279]: 2025-12-10 19:53:40.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:41 compute-0 nova_compute[189279]: 2025-12-10 19:53:41.315 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:41 compute-0 nova_compute[189279]: 2025-12-10 19:53:41.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:41 compute-0 nova_compute[189279]: 2025-12-10 19:53:41.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:41 compute-0 nova_compute[189279]: 2025-12-10 19:53:41.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:53:41 compute-0 nova_compute[189279]: 2025-12-10 19:53:41.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:53:42 compute-0 nova_compute[189279]: 2025-12-10 19:53:42.055 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:53:42 compute-0 nova_compute[189279]: 2025-12-10 19:53:42.056 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:53:42 compute-0 nova_compute[189279]: 2025-12-10 19:53:42.056 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 19:53:42 compute-0 nova_compute[189279]: 2025-12-10 19:53:42.056 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.171 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.171 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa156cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.178 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 12986b74-7b15-4ff4-9019-081950660d4b from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 19:53:42 compute-0 nova_compute[189279]: 2025-12-10 19:53:42.425 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:42.514 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/12986b74-7b15-4ff4-9019-081950660d4b -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.284 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Wed, 10 Dec 2025 19:53:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8bb038f6-c12c-4376-9635-e62b201fb5c4 x-openstack-request-id: req-8bb038f6-c12c-4376-9635-e62b201fb5c4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.284 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "12986b74-7b15-4ff4-9019-081950660d4b", "name": "test_0", "status": "ACTIVE", "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "user_id": "2143e69e49fd49db99c8737c973c1ea5", "metadata": {}, "hostId": "dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852", "image": {"id": "06e6231d-0a77-4b09-acb3-e7faf5a777be", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/06e6231d-0a77-4b09-acb3-e7faf5a777be"}]}, "flavor": {"id": "0fc2e5b1-b522-4c52-bdef-97db09e458e4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/0fc2e5b1-b522-4c52-bdef-97db09e458e4"}]}, "created": "2025-12-10T19:53:05Z", "updated": "2025-12-10T19:53:14Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.139", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:96:2e:35"}, {"version": 4, "addr": "192.168.122.175", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:96:2e:35"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/12986b74-7b15-4ff4-9019-081950660d4b"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/12986b74-7b15-4ff4-9019-081950660d4b"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-10T19:53:14.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.284 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/12986b74-7b15-4ff4-9019-081950660d4b used request id req-8bb038f6-c12c-4376-9635-e62b201fb5c4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
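The exchange above is a plain Nova compute-API GET for the server's details. A minimal sketch of reproducing the same lookup with python-novaclient (the auth URL, credentials and project below are placeholders/assumptions, not values taken from this log):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Hypothetical credentials; substitute the deployment's real ones.
    auth = v3.Password(auth_url="https://keystone-internal.openstack.svc:5000/v3",
                       username="admin", password="secret", project_name="admin",
                       user_domain_name="Default", project_domain_name="Default")
    sess = session.Session(auth=auth)
    nova = client.Client("2.1", session=sess)

    # The same lookup the ceilometer discovery performed above.
    server = nova.servers.get("12986b74-7b15-4ff4-9019-081950660d4b")
    print(server.name, server.status)   # expected from the RESP BODY above: test_0 ACTIVE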
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.287 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.287 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.288 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T19:53:43.287958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
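Every meter in this section follows the same sequence the manager logs: run discovery, check whether the pollster needs coordination, record a heartbeat, collect samples, then report completion. A rough, simplified outline of that loop (placeholder names; not ceilometer's actual internals):

    def run_polling_cycle(pollsters, discover, heartbeat, publish):
        # Simplified outline of one cycle as reflected in the log above; not real ceilometer code.
        for pollster in pollsters:
            # "Executing discovery process ... discovery method [local_instances]"
            resources = discover("local_instances")
            # "Checking if we need coordination ..." -- none of these sources use it here
            if getattr(pollster, "coordination_group", None) is None:
                heartbeat(pollster.name)                   # "Pollster heartbeat update: <meter>"
                samples = pollster.get_samples(resources)  # "<uuid>/<meter> volume: <value>"
                publish(samples)                           # "Finished polling pollster <meter>"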
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.291 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.292 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T19:53:43.292183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.321 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.322 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.322 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
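For scale: the two 1073741824-byte capacity samples are 2^30 bytes = 1 GiB each, matching the flavor's 1 GB root disk and 1 GB ephemeral disk reported in the discovery data above; the third, 485376-byte device is presumably the config drive (the server has config_drive set to True).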
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.323 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T19:53:43.323757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T19:53:43.324637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.330 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 12986b74-7b15-4ff4-9019-081950660d4b / tap20b76af1-42 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.330 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.331 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.331 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T19:53:43.331690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.333 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T19:53:43.333267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.334 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.334 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T19:53:43.334729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.335 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T19:53:43.336171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.336 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.337 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T19:53:43.337526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.365 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.365 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 12986b74-7b15-4ff4-9019-081950660d4b: ceilometer.compute.pollsters.NoVolumeException
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
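memory.usage reporting Unavailable typically means libvirt returned no balloon/RSS statistics for the guest. A common remedy (an assumption here; this log does not show the nova configuration) is to enable periodic memory statistics in nova-compute and then hard reboot or rebuild the instance:

    # /etc/nova/nova.conf on the compute node (excerpt)
    [libvirt]
    # Collect memballoon statistics from guests every 10 seconds (0 disables them).
    mem_stats_period_seconds = 10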
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.366 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.366 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T19:53:43.366398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.366 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
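The permanent error above follows directly from the earlier line stating that LibvirtInspector provides no data for OutgoingBytesRatePollster: the manager blacklists the rate pollster for this resource rather than retrying it. If the message is unwanted, the *.rate meters can simply be left out of the polling definition; a sketch (file path and interval are assumptions, not taken from this log):

    # /etc/ceilometer/polling.yaml (excerpt)
    sources:
        - name: pollsters
          interval: 300
          meters:
            - cpu
            - memory.usage
            - network.incoming.bytes
            - network.outgoing.bytes
            # rate meters such as network.outgoing.bytes.rate omitted;
            # rates can be derived downstream from the cumulative meters.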
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.368 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.368 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T19:53:43.368509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.369 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.370 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T19:53:43.369975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.370 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.371 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.371 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.372 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T19:53:43.372033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.372 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.373 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.373 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T19:53:43.373382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.373 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.374 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.374 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.375 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T19:53:43.374854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T19:53:43.376280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.440 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.441 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.441 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.442 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.442 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.442 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T19:53:43.442783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.442 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 28710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.443 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.444 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T19:53:43.444181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.444 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 327030457 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.444 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 1008128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T19:53:43.445854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.445 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.446 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.446 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.447 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.447 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.447 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.447 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T19:53:43.447325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.447 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.448 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.448 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.448 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.448 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.449 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.449 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.449 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.449 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.450 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T19:53:43.448986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.450 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.451 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.451 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T19:53:43.451351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T19:53:43.452435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.452 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.453 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.453 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.454 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.454 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.454 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.454 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.455 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T19:53:43.454095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.455 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T19:53:43.455787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.456 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.456 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.457 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.457 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.457 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T19:53:43.456975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:53:43.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:53:43 compute-0 nova_compute[189279]: 2025-12-10 19:53:43.732 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:53:43 compute-0 nova_compute[189279]: 2025-12-10 19:53:43.755 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:53:43 compute-0 nova_compute[189279]: 2025-12-10 19:53:43.756 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 19:53:43 compute-0 nova_compute[189279]: 2025-12-10 19:53:43.757 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:43 compute-0 nova_compute[189279]: 2025-12-10 19:53:43.757 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:43 compute-0 nova_compute[189279]: 2025-12-10 19:53:43.758 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:43 compute-0 nova_compute[189279]: 2025-12-10 19:53:43.758 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:53:44 compute-0 nova_compute[189279]: 2025-12-10 19:53:44.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.514 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.515 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.515 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.516 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.621 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:45 compute-0 podman[239694]: 2025-12-10 19:53:45.664389224 +0000 UTC m=+0.096373839 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Dec 10 19:53:45 compute-0 podman[239696]: 2025-12-10 19:53:45.672210681 +0000 UTC m=+0.102172469 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.706 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.707 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.768 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.769 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.848 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.850 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:53:45 compute-0 nova_compute[189279]: 2025-12-10 19:53:45.910 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
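The two probes above show the nova_compute resource-tracker audit sizing the instance's root and ephemeral disks with qemu-img run under an oslo.concurrency prlimit wrapper. A minimal, illustrative Python sketch that re-runs the same probe (the path and prlimit bounds are copied verbatim from the log lines above; it assumes qemu-img and oslo.concurrency are installed, as on this host, and is not Nova's own code):

# Illustrative sketch only: re-run the disk probe that nova_compute logs above.
import json
import subprocess

disk = "/var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk"
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",        # cap child memory/CPU, as in the log
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", disk, "--force-share", "--output=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
info = json.loads(result.stdout)
print(info["format"], info["virtual-size"], info.get("actual-size"))

The --force-share flag lets qemu-img read the image metadata even while QEMU holds the disk open for the running guest, which is why the probe succeeds against an active instance.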
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.212 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.216 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5319MB free_disk=72.39587020874023GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.216 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.217 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.293 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.293 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.294 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.314 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.349 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.378 189283 ERROR nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [req-ab6ffff6-312d-4131-850c-75077c7c3426] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID fc709657-cb59-4c0b-8f54-5be8a24ab091.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-ab6ffff6-312d-4131-850c-75077c7c3426"}]}
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.392 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.418 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.419 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.441 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.484 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.524 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.562 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updated inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.563 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.563 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.596 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:53:46 compute-0 nova_compute[189279]: 2025-12-10 19:53:46.597 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
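The ERROR at 19:53:46.378 above is not fatal: Placement rejected the inventory update with 409 "placement.concurrent_update" because the cached resource-provider generation was stale, and the following lines show nova_compute refreshing the provider's inventory, aggregates and traits and re-submitting, after which the generation advances from 3 to 4. A hedged sketch of that refresh-and-retry pattern, using hypothetical get_provider_state() and put_inventory() helpers as stand-ins for the real Placement client (this is not Nova's report-client code):

# Hedged sketch of the generation-conflict retry visible in the log above.
def set_inventory_with_retry(get_provider_state, put_inventory, inventory, retries=3):
    generation = get_provider_state()["generation"]
    for _ in range(retries):
        resp = put_inventory(generation=generation, inventory=inventory)
        if resp["status"] != 409:
            return resp  # accepted; Placement bumps the provider generation
        # 409 "placement.concurrent_update": another writer changed the provider;
        # refresh the cached generation and try again, as nova_compute does above.
        generation = get_provider_state()["generation"]
    raise RuntimeError("resource provider generation kept changing")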
Dec 10 19:53:47 compute-0 nova_compute[189279]: 2025-12-10 19:53:47.428 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:48 compute-0 podman[239771]: 2025-12-10 19:53:48.157752496 +0000 UTC m=+0.144871247 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 10 19:53:48 compute-0 ovn_controller[97701]: 2025-12-10T19:53:48Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:2e:35 192.168.0.139
Dec 10 19:53:48 compute-0 ovn_controller[97701]: 2025-12-10T19:53:48Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:2e:35 192.168.0.139
Dec 10 19:53:51 compute-0 nova_compute[189279]: 2025-12-10 19:53:51.317 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:52 compute-0 nova_compute[189279]: 2025-12-10 19:53:52.432 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:54 compute-0 podman[239797]: 2025-12-10 19:53:54.096642931 +0000 UTC m=+0.076470955 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 19:53:54 compute-0 ovn_controller[97701]: 2025-12-10T19:53:54Z|00034|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec 10 19:53:56 compute-0 nova_compute[189279]: 2025-12-10 19:53:56.320 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:57 compute-0 nova_compute[189279]: 2025-12-10 19:53:57.435 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:53:59 compute-0 podman[203484]: time="2025-12-10T19:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:53:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:53:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
Dec 10 19:54:01 compute-0 nova_compute[189279]: 2025-12-10 19:54:01.323 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:01 compute-0 openstack_network_exporter[205632]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:54:01 compute-0 openstack_network_exporter[205632]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:54:01 compute-0 openstack_network_exporter[205632]: ERROR   19:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:54:01 compute-0 openstack_network_exporter[205632]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:54:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:54:01 compute-0 openstack_network_exporter[205632]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:54:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:54:02 compute-0 podman[239816]: 2025-12-10 19:54:02.105418984 +0000 UTC m=+0.090664909 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 10 19:54:02 compute-0 nova_compute[189279]: 2025-12-10 19:54:02.439 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:04 compute-0 podman[239838]: 2025-12-10 19:54:04.110047997 +0000 UTC m=+0.083944403 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:54:06 compute-0 nova_compute[189279]: 2025-12-10 19:54:06.328 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:07 compute-0 nova_compute[189279]: 2025-12-10 19:54:07.443 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:07 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:07.681 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:54:07 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:07.682 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 19:54:07 compute-0 nova_compute[189279]: 2025-12-10 19:54:07.682 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:10 compute-0 podman[239862]: 2025-12-10 19:54:10.077885135 +0000 UTC m=+0.060640325 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 19:54:10 compute-0 podman[239863]: 2025-12-10 19:54:10.093276073 +0000 UTC m=+0.071358913 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3)
Dec 10 19:54:10 compute-0 podman[239864]: 2025-12-10 19:54:10.107207 +0000 UTC m=+0.076816545 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543)
Dec 10 19:54:11 compute-0 nova_compute[189279]: 2025-12-10 19:54:11.330 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:12 compute-0 nova_compute[189279]: 2025-12-10 19:54:12.446 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.450 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.451 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.470 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.537 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.538 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.546 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.547 189283 INFO nova.compute.claims [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Claim successful on node compute-0.ctlplane.example.com
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.657 189283 DEBUG nova.compute.provider_tree [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.673 189283 DEBUG nova.scheduler.client.report [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.691 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.692 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.732 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.732 189283 DEBUG nova.network.neutron [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.750 189283 INFO nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.787 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.862 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.863 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.864 189283 INFO nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Creating image(s)
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.864 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.865 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.866 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.877 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.961 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.963 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "193edf3941027c090c206b4992bbea3ae5563eb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.963 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:13 compute-0 nova_compute[189279]: 2025-12-10 19:54:13.974 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.034 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.036 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.088 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.089 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.090 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.164 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.165 189283 DEBUG nova.virt.disk.api [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Checking if we can resize image /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.166 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.224 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.225 189283 DEBUG nova.virt.disk.api [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Cannot resize image /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.226 189283 DEBUG nova.objects.instance [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'migration_context' on Instance uuid ac2c8050-72b5-419c-ba99-c4feeb26147a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.242 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.243 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.244 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.257 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.333 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.335 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.336 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.349 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.407 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.408 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.455 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.457 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.457 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.518 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.520 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.520 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Ensure instance console log exists: /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.521 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.521 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:14 compute-0 nova_compute[189279]: 2025-12-10 19:54:14.521 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.290 189283 DEBUG nova.network.neutron [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Successfully updated port: 5d3f5317-707c-4080-a612-71018c7ba2ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.310 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.311 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquired lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.311 189283 DEBUG nova.network.neutron [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.384 189283 DEBUG nova.compute.manager [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received event network-changed-5d3f5317-707c-4080-a612-71018c7ba2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.385 189283 DEBUG nova.compute.manager [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Refreshing instance network info cache due to event network-changed-5d3f5317-707c-4080-a612-71018c7ba2ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.386 189283 DEBUG oslo_concurrency.lockutils [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:54:15 compute-0 nova_compute[189279]: 2025-12-10 19:54:15.435 189283 DEBUG nova.network.neutron [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 19:54:15 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:15.685 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:54:16 compute-0 podman[239943]: 2025-12-10 19:54:16.093314119 +0000 UTC m=+0.080012756 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:54:16 compute-0 podman[239944]: 2025-12-10 19:54:16.127092303 +0000 UTC m=+0.094224030 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.216 189283 DEBUG nova.network.neutron [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updating instance_info_cache with network_info: [{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.269 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Releasing lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.269 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Instance network_info: |[{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.270 189283 DEBUG oslo_concurrency.lockutils [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.270 189283 DEBUG nova.network.neutron [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Refreshing network info cache for port 5d3f5317-707c-4080-a612-71018c7ba2ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.273 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Start _get_guest_xml network_info=[{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_options': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.283 189283 WARNING nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.294 189283 DEBUG nova.virt.libvirt.host [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.295 189283 DEBUG nova.virt.libvirt.host [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.302 189283 DEBUG nova.virt.libvirt.host [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.302 189283 DEBUG nova.virt.libvirt.host [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.303 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.303 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T19:52:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0fc2e5b1-b522-4c52-bdef-97db09e458e4',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.304 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.304 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.304 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.305 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.305 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.305 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.306 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.306 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.306 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.307 189283 DEBUG nova.virt.hardware [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.311 189283 DEBUG nova.virt.libvirt.vif [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf',id=2,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-fjjhmsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:54:13Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzgxNTg4NTE1ODUyMTA1Mjc1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 10 19:54:16 compute-0 nova_compute[189279]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzgxNTg4NTE1ODUyMTA1Mjc1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=ac2c8050-72b5-419c-ba99-c4feeb26147a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.311 189283 DEBUG nova.network.os_vif_util [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.312 189283 DEBUG nova.network.os_vif_util [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:37:97,bridge_name='br-int',has_traffic_filtering=True,id=5d3f5317-707c-4080-a612-71018c7ba2ed,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5d3f5317-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.313 189283 DEBUG nova.objects.instance [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'pci_devices' on Instance uuid ac2c8050-72b5-419c-ba99-c4feeb26147a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.334 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.341 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] End _get_guest_xml xml=<domain type="kvm">
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <uuid>ac2c8050-72b5-419c-ba99-c4feeb26147a</uuid>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <name>instance-00000002</name>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <memory>524288</memory>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <metadata>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <nova:name>vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf</nova:name>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 19:54:16</nova:creationTime>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <nova:flavor name="m1.small">
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:memory>512</nova:memory>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:ephemeral>1</nova:ephemeral>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:user uuid="2143e69e49fd49db99c8737c973c1ea5">admin</nova:user>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:project uuid="fe518ea62a94467e823b2b1046c57a2e">admin</nova:project>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="06e6231d-0a77-4b09-acb3-e7faf5a777be"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         <nova:port uuid="5d3f5317-707c-4080-a612-71018c7ba2ed">
Dec 10 19:54:16 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="192.168.0.123" ipVersion="4"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   </metadata>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <system>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <entry name="serial">ac2c8050-72b5-419c-ba99-c4feeb26147a</entry>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <entry name="uuid">ac2c8050-72b5-419c-ba99-c4feeb26147a</entry>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </system>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <os>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   </os>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <features>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <apic/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   </features>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   </clock>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <target dev="vdb" bus="virtio"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.config"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:af:37:97"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <target dev="tap5d3f5317-70"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/console.log" append="off"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </serial>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <video>
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </video>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 19:54:16 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 19:54:16 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 19:54:16 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:54:16 compute-0 nova_compute[189279]: </domain>
Dec 10 19:54:16 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.343 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Preparing to wait for external event network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.343 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.344 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.344 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.345 189283 DEBUG nova.virt.libvirt.vif [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf',id=2,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-fjjhmsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:54:13Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzgxNTg4NTE1ODUyMTA1Mjc1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 10 19:54:16 compute-0 nova_compute[189279]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzgxNTg4NTE1ODUyMTA1Mjc1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=ac2c8050-72b5-419c-ba99-c4feeb26147a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.345 189283 DEBUG nova.network.os_vif_util [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.346 189283 DEBUG nova.network.os_vif_util [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:37:97,bridge_name='br-int',has_traffic_filtering=True,id=5d3f5317-707c-4080-a612-71018c7ba2ed,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5d3f5317-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.346 189283 DEBUG os_vif [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:37:97,bridge_name='br-int',has_traffic_filtering=True,id=5d3f5317-707c-4080-a612-71018c7ba2ed,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5d3f5317-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.347 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.347 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.347 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.350 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.351 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d3f5317-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.351 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d3f5317-70, col_values=(('external_ids', {'iface-id': '5d3f5317-707c-4080-a612-71018c7ba2ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:37:97', 'vm-uuid': 'ac2c8050-72b5-419c-ba99-c4feeb26147a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.353 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 NetworkManager[56238]: <info>  [1765396456.3563] manager: (tap5d3f5317-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.356 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 19:54:16 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 19:54:16.311 189283 DEBUG nova.virt.libvirt.vif [None req-f194a873-1d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.365 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.366 189283 INFO os_vif [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:37:97,bridge_name='br-int',has_traffic_filtering=True,id=5d3f5317-707c-4080-a612-71018c7ba2ed,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5d3f5317-70')
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.423 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.423 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.424 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.424 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No VIF found with MAC fa:16:3e:af:37:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.425 189283 INFO nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Using config drive
Dec 10 19:54:16 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 19:54:16.345 189283 DEBUG nova.virt.libvirt.vif [None req-f194a873-1d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.724 189283 INFO nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Creating config drive at /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.config
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.746 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr00jq88m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.874 189283 DEBUG oslo_concurrency.processutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr00jq88m" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:16 compute-0 NetworkManager[56238]: <info>  [1765396456.9574] manager: (tap5d3f5317-70): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec 10 19:54:16 compute-0 kernel: tap5d3f5317-70: entered promiscuous mode
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.962 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 ovn_controller[97701]: 2025-12-10T19:54:16Z|00035|binding|INFO|Claiming lport 5d3f5317-707c-4080-a612-71018c7ba2ed for this chassis.
Dec 10 19:54:16 compute-0 ovn_controller[97701]: 2025-12-10T19:54:16Z|00036|binding|INFO|5d3f5317-707c-4080-a612-71018c7ba2ed: Claiming fa:16:3e:af:37:97 192.168.0.123
Dec 10 19:54:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:16.971 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:37:97 192.168.0.123'], port_security=['fa:16:3e:af:37:97 192.168.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-w43iflqhcsjr-gtk4633myb43-port-tdaih7wc5ctt', 'neutron:cidrs': '192.168.0.123/24', 'neutron:device_id': 'ac2c8050-72b5-419c-ba99-c4feeb26147a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-w43iflqhcsjr-gtk4633myb43-port-tdaih7wc5ctt', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=5d3f5317-707c-4080-a612-71018c7ba2ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:54:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:16.973 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 5d3f5317-707c-4080-a612-71018c7ba2ed in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 bound to our chassis
Dec 10 19:54:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:16.975 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.981 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 ovn_controller[97701]: 2025-12-10T19:54:16Z|00037|binding|INFO|Setting lport 5d3f5317-707c-4080-a612-71018c7ba2ed ovn-installed in OVS
Dec 10 19:54:16 compute-0 ovn_controller[97701]: 2025-12-10T19:54:16Z|00038|binding|INFO|Setting lport 5d3f5317-707c-4080-a612-71018c7ba2ed up in Southbound
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.983 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 nova_compute[189279]: 2025-12-10 19:54:16.984 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:16.995 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6b1658-a800-42de-b214-eadb1473fa8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:54:17 compute-0 systemd-udevd[240008]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:54:17 compute-0 NetworkManager[56238]: <info>  [1765396457.0245] device (tap5d3f5317-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 19:54:17 compute-0 systemd-machined[155642]: New machine qemu-2-instance-00000002.
Dec 10 19:54:17 compute-0 NetworkManager[56238]: <info>  [1765396457.0250] device (tap5d3f5317-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 19:54:17 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.040 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2c5a8f-55d4-4f4c-8844-6b0260197fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.044 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[50972937-c580-4b62-bd71-7a291199c9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.081 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[a1106729-d527-41ba-ba48-c0d3954facc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.106 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[08aeccbd-e805-417b-821f-9c6aad1f42b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 43696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240017, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.128 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[79bae4d9-3784-4aa1-ade1-8925aaef9d8d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240021, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240021, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.129 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.131 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.133 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.134 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.135 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.136 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:54:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:17.136 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.233 189283 DEBUG nova.compute.manager [req-757bb132-f42a-4168-9d00-26576f0f10a6 req-10c0816f-0774-4ef6-8464-38217cbeba52 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received event network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.233 189283 DEBUG oslo_concurrency.lockutils [req-757bb132-f42a-4168-9d00-26576f0f10a6 req-10c0816f-0774-4ef6-8464-38217cbeba52 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.234 189283 DEBUG oslo_concurrency.lockutils [req-757bb132-f42a-4168-9d00-26576f0f10a6 req-10c0816f-0774-4ef6-8464-38217cbeba52 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.234 189283 DEBUG oslo_concurrency.lockutils [req-757bb132-f42a-4168-9d00-26576f0f10a6 req-10c0816f-0774-4ef6-8464-38217cbeba52 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.234 189283 DEBUG nova.compute.manager [req-757bb132-f42a-4168-9d00-26576f0f10a6 req-10c0816f-0774-4ef6-8464-38217cbeba52 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Processing event network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.410 189283 DEBUG nova.network.neutron [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updated VIF entry in instance network info cache for port 5d3f5317-707c-4080-a612-71018c7ba2ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.411 189283 DEBUG nova.network.neutron [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updating instance_info_cache with network_info: [{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.436 189283 DEBUG oslo_concurrency.lockutils [req-74a26401-3e42-4550-a7dc-0cffe707b879 req-0112e7d6-6214-442f-a0da-e8740c8a93a2 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.723 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396457.7225285, ac2c8050-72b5-419c-ba99-c4feeb26147a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.723 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] VM Started (Lifecycle Event)
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.725 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.729 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.734 189283 INFO nova.virt.libvirt.driver [-] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Instance spawned successfully.
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.734 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.778 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.785 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.789 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.789 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.790 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.790 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.791 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.791 189283 DEBUG nova.virt.libvirt.driver [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.835 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.835 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396457.7226913, ac2c8050-72b5-419c-ba99-c4feeb26147a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.836 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] VM Paused (Lifecycle Event)
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.860 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.867 189283 INFO nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Took 4.00 seconds to spawn the instance on the hypervisor.
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.867 189283 DEBUG nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.869 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396457.7285662, ac2c8050-72b5-419c-ba99-c4feeb26147a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.869 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] VM Resumed (Lifecycle Event)
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.901 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.906 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.936 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.949 189283 INFO nova.compute.manager [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Took 4.44 seconds to build instance.
Dec 10 19:54:17 compute-0 nova_compute[189279]: 2025-12-10 19:54:17.971 189283 DEBUG oslo_concurrency.lockutils [None req-f194a873-1d37-4076-bf55-4b4ce0967fea 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:19 compute-0 podman[240030]: 2025-12-10 19:54:19.158457497 +0000 UTC m=+0.130979025 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 19:54:19 compute-0 nova_compute[189279]: 2025-12-10 19:54:19.329 189283 DEBUG nova.compute.manager [req-af7b8e53-1849-4b4d-a02a-b92cdbaa96af req-e42952fc-e195-473b-8f09-f45802d1a044 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received event network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:54:19 compute-0 nova_compute[189279]: 2025-12-10 19:54:19.330 189283 DEBUG oslo_concurrency.lockutils [req-af7b8e53-1849-4b4d-a02a-b92cdbaa96af req-e42952fc-e195-473b-8f09-f45802d1a044 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:19 compute-0 nova_compute[189279]: 2025-12-10 19:54:19.331 189283 DEBUG oslo_concurrency.lockutils [req-af7b8e53-1849-4b4d-a02a-b92cdbaa96af req-e42952fc-e195-473b-8f09-f45802d1a044 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:19 compute-0 nova_compute[189279]: 2025-12-10 19:54:19.332 189283 DEBUG oslo_concurrency.lockutils [req-af7b8e53-1849-4b4d-a02a-b92cdbaa96af req-e42952fc-e195-473b-8f09-f45802d1a044 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:19 compute-0 nova_compute[189279]: 2025-12-10 19:54:19.333 189283 DEBUG nova.compute.manager [req-af7b8e53-1849-4b4d-a02a-b92cdbaa96af req-e42952fc-e195-473b-8f09-f45802d1a044 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] No waiting events found dispatching network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 19:54:19 compute-0 nova_compute[189279]: 2025-12-10 19:54:19.333 189283 WARNING nova.compute.manager [req-af7b8e53-1849-4b4d-a02a-b92cdbaa96af req-e42952fc-e195-473b-8f09-f45802d1a044 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received unexpected event network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed for instance with vm_state active and task_state None.
Dec 10 19:54:21 compute-0 nova_compute[189279]: 2025-12-10 19:54:21.338 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:21 compute-0 nova_compute[189279]: 2025-12-10 19:54:21.355 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:23.367 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:23.369 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:54:23.370 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:25 compute-0 podman[240060]: 2025-12-10 19:54:25.141375495 +0000 UTC m=+0.123580684 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute)
Dec 10 19:54:26 compute-0 nova_compute[189279]: 2025-12-10 19:54:26.341 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:26 compute-0 nova_compute[189279]: 2025-12-10 19:54:26.358 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:29 compute-0 podman[203484]: time="2025-12-10T19:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:54:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:54:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Dec 10 19:54:31 compute-0 nova_compute[189279]: 2025-12-10 19:54:31.344 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:31 compute-0 nova_compute[189279]: 2025-12-10 19:54:31.361 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:31 compute-0 openstack_network_exporter[205632]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:54:31 compute-0 openstack_network_exporter[205632]: ERROR   19:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:54:31 compute-0 openstack_network_exporter[205632]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:54:31 compute-0 openstack_network_exporter[205632]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:54:31 compute-0 openstack_network_exporter[205632]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:54:33 compute-0 podman[240079]: 2025-12-10 19:54:33.160223812 +0000 UTC m=+0.121233521 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 19:54:35 compute-0 podman[240099]: 2025-12-10 19:54:35.105144549 +0000 UTC m=+0.086872141 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:54:35 compute-0 nova_compute[189279]: 2025-12-10 19:54:35.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:35 compute-0 nova_compute[189279]: 2025-12-10 19:54:35.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 19:54:35 compute-0 nova_compute[189279]: 2025-12-10 19:54:35.504 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 19:54:35 compute-0 nova_compute[189279]: 2025-12-10 19:54:35.505 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:36 compute-0 nova_compute[189279]: 2025-12-10 19:54:36.346 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:36 compute-0 nova_compute[189279]: 2025-12-10 19:54:36.364 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:40 compute-0 nova_compute[189279]: 2025-12-10 19:54:40.511 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:40 compute-0 nova_compute[189279]: 2025-12-10 19:54:40.552 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:41 compute-0 podman[240124]: 2025-12-10 19:54:41.108741324 +0000 UTC m=+0.090262186 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 10 19:54:41 compute-0 podman[240126]: 2025-12-10 19:54:41.122737996 +0000 UTC m=+0.095110172 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, version=9.4, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 10 19:54:41 compute-0 podman[240125]: 2025-12-10 19:54:41.13505532 +0000 UTC m=+0.107660862 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Dec 10 19:54:41 compute-0 nova_compute[189279]: 2025-12-10 19:54:41.348 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:41 compute-0 nova_compute[189279]: 2025-12-10 19:54:41.366 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:42 compute-0 nova_compute[189279]: 2025-12-10 19:54:42.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:42 compute-0 nova_compute[189279]: 2025-12-10 19:54:42.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:42 compute-0 nova_compute[189279]: 2025-12-10 19:54:42.491 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:54:42 compute-0 nova_compute[189279]: 2025-12-10 19:54:42.492 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:54:43 compute-0 nova_compute[189279]: 2025-12-10 19:54:43.091 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:54:43 compute-0 nova_compute[189279]: 2025-12-10 19:54:43.092 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:54:43 compute-0 nova_compute[189279]: 2025-12-10 19:54:43.092 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 19:54:43 compute-0 nova_compute[189279]: 2025-12-10 19:54:43.093 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.737 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.759 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.760 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.761 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.762 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.762 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.763 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:44 compute-0 nova_compute[189279]: 2025-12-10 19:54:44.763 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:54:45 compute-0 nova_compute[189279]: 2025-12-10 19:54:45.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:46 compute-0 nova_compute[189279]: 2025-12-10 19:54:46.350 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:46 compute-0 nova_compute[189279]: 2025-12-10 19:54:46.369 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:46 compute-0 nova_compute[189279]: 2025-12-10 19:54:46.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:46 compute-0 nova_compute[189279]: 2025-12-10 19:54:46.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 19:54:47 compute-0 ovn_controller[97701]: 2025-12-10T19:54:47Z|00039|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Dec 10 19:54:47 compute-0 podman[240177]: 2025-12-10 19:54:47.103329516 +0000 UTC m=+0.074631818 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 10 19:54:47 compute-0 podman[240178]: 2025-12-10 19:54:47.107994587 +0000 UTC m=+0.071589184 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.505 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.530 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.620 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.694 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.697 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.774 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.776 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.848 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.850 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.920 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:47 compute-0 nova_compute[189279]: 2025-12-10 19:54:47.929 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.001 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.004 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.088 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.090 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.145 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.147 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.206 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.518 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.520 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=72.37416076660156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.747 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.748 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.748 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.749 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.917 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.934 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.958 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:54:48 compute-0 nova_compute[189279]: 2025-12-10 19:54:48.959 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:54:50 compute-0 podman[240243]: 2025-12-10 19:54:50.138220506 +0000 UTC m=+0.112781925 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 19:54:51 compute-0 nova_compute[189279]: 2025-12-10 19:54:51.352 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:51 compute-0 nova_compute[189279]: 2025-12-10 19:54:51.372 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:53 compute-0 ovn_controller[97701]: 2025-12-10T19:54:53Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:37:97 192.168.0.123
Dec 10 19:54:53 compute-0 ovn_controller[97701]: 2025-12-10T19:54:53Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:37:97 192.168.0.123
Dec 10 19:54:56 compute-0 podman[240287]: 2025-12-10 19:54:56.117964949 +0000 UTC m=+0.096207893 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 19:54:56 compute-0 nova_compute[189279]: 2025-12-10 19:54:56.356 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:56 compute-0 nova_compute[189279]: 2025-12-10 19:54:56.373 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:54:59 compute-0 podman[203484]: time="2025-12-10T19:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:54:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:54:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4785 "" "Go-http-client/1.1"
Dec 10 19:55:01 compute-0 nova_compute[189279]: 2025-12-10 19:55:01.358 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:01 compute-0 nova_compute[189279]: 2025-12-10 19:55:01.376 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:01 compute-0 openstack_network_exporter[205632]: ERROR   19:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:55:01 compute-0 openstack_network_exporter[205632]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:55:01 compute-0 openstack_network_exporter[205632]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:55:01 compute-0 openstack_network_exporter[205632]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:55:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:55:01 compute-0 openstack_network_exporter[205632]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:55:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:55:04 compute-0 podman[240307]: 2025-12-10 19:55:04.114383989 +0000 UTC m=+0.095502872 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 10 19:55:06 compute-0 podman[240327]: 2025-12-10 19:55:06.102794338 +0000 UTC m=+0.073114525 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 19:55:06 compute-0 nova_compute[189279]: 2025-12-10 19:55:06.359 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:06 compute-0 nova_compute[189279]: 2025-12-10 19:55:06.378 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:11 compute-0 nova_compute[189279]: 2025-12-10 19:55:11.361 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:11 compute-0 nova_compute[189279]: 2025-12-10 19:55:11.381 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:12 compute-0 podman[240351]: 2025-12-10 19:55:12.10064992 +0000 UTC m=+0.085407410 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:55:12 compute-0 podman[240353]: 2025-12-10 19:55:12.119009093 +0000 UTC m=+0.087726305 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 10 19:55:12 compute-0 podman[240352]: 2025-12-10 19:55:12.140548926 +0000 UTC m=+0.104020731 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm)
Dec 10 19:55:16 compute-0 nova_compute[189279]: 2025-12-10 19:55:16.364 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:16 compute-0 nova_compute[189279]: 2025-12-10 19:55:16.383 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:18 compute-0 podman[240410]: 2025-12-10 19:55:18.124067444 +0000 UTC m=+0.091401028 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:55:18 compute-0 podman[240409]: 2025-12-10 19:55:18.16397291 +0000 UTC m=+0.127008934 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 10 19:55:21 compute-0 podman[240455]: 2025-12-10 19:55:21.111568901 +0000 UTC m=+0.097501768 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:55:21 compute-0 nova_compute[189279]: 2025-12-10 19:55:21.369 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:21 compute-0 nova_compute[189279]: 2025-12-10 19:55:21.386 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:55:23.368 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:55:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:55:23.369 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:55:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:55:23.369 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.750 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.779 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Triggering sync for uuid 12986b74-7b15-4ff4-9019-081950660d4b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.780 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Triggering sync for uuid ac2c8050-72b5-419c-ba99-c4feeb26147a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.781 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.781 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "12986b74-7b15-4ff4-9019-081950660d4b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.782 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.783 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.825 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "12986b74-7b15-4ff4-9019-081950660d4b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:55:25 compute-0 nova_compute[189279]: 2025-12-10 19:55:25.827 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:55:26 compute-0 nova_compute[189279]: 2025-12-10 19:55:26.371 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:26 compute-0 nova_compute[189279]: 2025-12-10 19:55:26.389 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:27 compute-0 podman[240479]: 2025-12-10 19:55:27.123432566 +0000 UTC m=+0.106999804 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 19:55:29 compute-0 podman[203484]: time="2025-12-10T19:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:55:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:55:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Dec 10 19:55:31 compute-0 nova_compute[189279]: 2025-12-10 19:55:31.373 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:31 compute-0 nova_compute[189279]: 2025-12-10 19:55:31.390 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:31 compute-0 openstack_network_exporter[205632]: ERROR   19:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:55:31 compute-0 openstack_network_exporter[205632]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:55:31 compute-0 openstack_network_exporter[205632]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:55:31 compute-0 openstack_network_exporter[205632]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:55:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:55:31 compute-0 openstack_network_exporter[205632]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:55:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:55:35 compute-0 podman[240500]: 2025-12-10 19:55:35.12219356 +0000 UTC m=+0.100812501 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, distribution-scope=public, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible)
Dec 10 19:55:36 compute-0 nova_compute[189279]: 2025-12-10 19:55:36.376 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:36 compute-0 nova_compute[189279]: 2025-12-10 19:55:36.393 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:37 compute-0 podman[240518]: 2025-12-10 19:55:37.090400895 +0000 UTC m=+0.070830343 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:55:41 compute-0 nova_compute[189279]: 2025-12-10 19:55:41.378 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:41 compute-0 nova_compute[189279]: 2025-12-10 19:55:41.395 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:41 compute-0 nova_compute[189279]: 2025-12-10 19:55:41.521 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.172 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.172 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.180 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.182 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ac2c8050-72b5-419c-ba99-c4feeb26147a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 19:55:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:42.183 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ac2c8050-72b5-419c-ba99-c4feeb26147a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 19:55:42 compute-0 nova_compute[189279]: 2025-12-10 19:55:42.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:43 compute-0 podman[240544]: 2025-12-10 19:55:43.095190872 +0000 UTC m=+0.075930535 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 19:55:43 compute-0 podman[240543]: 2025-12-10 19:55:43.112689742 +0000 UTC m=+0.097291423 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 10 19:55:43 compute-0 podman[240545]: 2025-12-10 19:55:43.12796796 +0000 UTC m=+0.105549455 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, architecture=x86_64, release-0.7.12=, io.buildah.version=1.29.0, config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, name=ubi9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.305 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 10 Dec 2025 19:55:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3f2cf506-2c20-48d9-bc1b-29a266afa2bb x-openstack-request-id: req-3f2cf506-2c20-48d9-bc1b-29a266afa2bb _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.305 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ac2c8050-72b5-419c-ba99-c4feeb26147a", "name": "vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf", "status": "ACTIVE", "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "user_id": "2143e69e49fd49db99c8737c973c1ea5", "metadata": {"metering.server_group": "9d7a68be-d216-4b06-b611-878d356c6d68"}, "hostId": "dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852", "image": {"id": "06e6231d-0a77-4b09-acb3-e7faf5a777be", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/06e6231d-0a77-4b09-acb3-e7faf5a777be"}]}, "flavor": {"id": "0fc2e5b1-b522-4c52-bdef-97db09e458e4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/0fc2e5b1-b522-4c52-bdef-97db09e458e4"}]}, "created": "2025-12-10T19:54:12Z", "updated": "2025-12-10T19:54:17Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.123", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:af:37:97"}, {"version": 4, "addr": "192.168.122.185", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:af:37:97"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ac2c8050-72b5-419c-ba99-c4feeb26147a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ac2c8050-72b5-419c-ba99-c4feeb26147a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-10T19:54:17.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.305 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ac2c8050-72b5-419c-ba99-c4feeb26147a used request id req-3f2cf506-2c20-48d9-bc1b-29a266afa2bb request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.307 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ac2c8050-72b5-419c-ba99-c4feeb26147a', 'name': 'vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.307 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.308 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T19:55:43.308341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T19:55:43.310460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.333 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.334 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.334 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.356 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.357 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.369 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.369 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.370 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.371 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T19:55:43.370193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T19:55:43.371316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.375 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.379 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ac2c8050-72b5-419c-ba99-c4feeb26147a / tap5d3f5317-70 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.379 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.381 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.381 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T19:55:43.381310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.383 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T19:55:43.383545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.384 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.384 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.384 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.385 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T19:55:43.384929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.385 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes volume: 4554 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.385 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.386 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.386 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.386 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T19:55:43.386155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.387 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.387 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T19:55:43.387308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.406 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.427 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/memory.usage volume: 49.1015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.428 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.429 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T19:55:43.429554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.430 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf>]
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.431 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.431 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T19:55:43.431009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.431 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.432 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.432 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.432 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T19:55:43.432278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.433 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.433 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.433 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T19:55:43.434309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.434 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets volume: 38 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T19:55:43.435506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.435 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T19:55:43.436627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.436 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.437 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.438 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T19:55:43.438600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.495 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.496 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.496 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.553 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.554 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.555 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.556 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.557 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 35660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T19:55:43.557258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.558 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/cpu volume: 40110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.559 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.560 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 425951231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T19:55:43.559875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.560 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 63853652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.560 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 49706577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.561 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 365261803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.561 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 76908904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.561 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 59898361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.562 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.563 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T19:55:43.563661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.564 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.564 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.565 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.565 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.565 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.566 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.567 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T19:55:43.567563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.568 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.568 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.568 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.569 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.569 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.570 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.571 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T19:55:43.571453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.572 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.572 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.572 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.573 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.573 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.574 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.575 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T19:55:43.575324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.576 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.578 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 816753194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T19:55:43.577959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.578 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 10242364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.579 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.579 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 1274016706 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.580 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 10530105 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.580 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.582 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.582 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T19:55:43.582294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.582 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.583 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.583 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.584 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.584 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.586 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T19:55:43.586568) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.587 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.589 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T19:55:43.588891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.589 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf>]
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:55:43.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:55:44 compute-0 nova_compute[189279]: 2025-12-10 19:55:44.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:44 compute-0 nova_compute[189279]: 2025-12-10 19:55:44.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:55:45 compute-0 nova_compute[189279]: 2025-12-10 19:55:45.154 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:55:45 compute-0 nova_compute[189279]: 2025-12-10 19:55:45.155 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:55:45 compute-0 nova_compute[189279]: 2025-12-10 19:55:45.155 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 19:55:46 compute-0 nova_compute[189279]: 2025-12-10 19:55:46.380 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:46 compute-0 nova_compute[189279]: 2025-12-10 19:55:46.396 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.179 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updating instance_info_cache with network_info: [{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.311 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.312 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.312 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.313 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.313 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.313 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.314 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:47 compute-0 nova_compute[189279]: 2025-12-10 19:55:47.314 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.541 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.542 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.542 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.542 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.867 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.953 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:48 compute-0 nova_compute[189279]: 2025-12-10 19:55:48.955 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.023 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.025 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:49 compute-0 podman[240600]: 2025-12-10 19:55:49.082611695 +0000 UTC m=+0.062376306 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.089 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.090 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:49 compute-0 podman[240601]: 2025-12-10 19:55:49.153977661 +0000 UTC m=+0.112735835 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.165 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.174 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.250 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.251 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.314 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.315 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.377 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.379 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.440 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.769 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.771 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5068MB free_disk=72.35262298583984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.772 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:55:49 compute-0 nova_compute[189279]: 2025-12-10 19:55:49.772 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.135 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.136 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.136 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.136 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.196 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.211 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.212 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:55:50 compute-0 nova_compute[189279]: 2025-12-10 19:55:50.213 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:55:51 compute-0 nova_compute[189279]: 2025-12-10 19:55:51.383 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:51 compute-0 nova_compute[189279]: 2025-12-10 19:55:51.399 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:52 compute-0 podman[240662]: 2025-12-10 19:55:52.136176089 +0000 UTC m=+0.121718094 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 10 19:55:56 compute-0 nova_compute[189279]: 2025-12-10 19:55:56.386 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:56 compute-0 nova_compute[189279]: 2025-12-10 19:55:56.400 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:55:58 compute-0 podman[240688]: 2025-12-10 19:55:58.111326229 +0000 UTC m=+0.084237347 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 19:55:59 compute-0 podman[203484]: time="2025-12-10T19:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:55:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:55:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Dec 10 19:56:01 compute-0 nova_compute[189279]: 2025-12-10 19:56:01.389 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:01 compute-0 nova_compute[189279]: 2025-12-10 19:56:01.403 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:01 compute-0 openstack_network_exporter[205632]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:56:01 compute-0 openstack_network_exporter[205632]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:56:01 compute-0 openstack_network_exporter[205632]: ERROR   19:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:56:01 compute-0 openstack_network_exporter[205632]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:56:01 compute-0 openstack_network_exporter[205632]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:56:06 compute-0 podman[240708]: 2025-12-10 19:56:06.111847641 +0000 UTC m=+0.097945921 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec 10 19:56:06 compute-0 nova_compute[189279]: 2025-12-10 19:56:06.390 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:06 compute-0 nova_compute[189279]: 2025-12-10 19:56:06.404 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:08 compute-0 podman[240729]: 2025-12-10 19:56:08.092467644 +0000 UTC m=+0.069821864 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:56:11 compute-0 nova_compute[189279]: 2025-12-10 19:56:11.394 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:11 compute-0 nova_compute[189279]: 2025-12-10 19:56:11.407 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:14 compute-0 podman[240753]: 2025-12-10 19:56:14.095429841 +0000 UTC m=+0.077741835 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 19:56:14 compute-0 podman[240755]: 2025-12-10 19:56:14.129537944 +0000 UTC m=+0.103680271 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, managed_by=edpm_ansible)
Dec 10 19:56:14 compute-0 podman[240754]: 2025-12-10 19:56:14.134432272 +0000 UTC m=+0.112011124 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 10 19:56:16 compute-0 nova_compute[189279]: 2025-12-10 19:56:16.395 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:16 compute-0 nova_compute[189279]: 2025-12-10 19:56:16.410 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:19 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 10 19:56:19 compute-0 podman[240812]: 2025-12-10 19:56:19.5311315 +0000 UTC m=+0.076757698 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 10 19:56:19 compute-0 podman[240813]: 2025-12-10 19:56:19.548811095 +0000 UTC m=+0.090431831 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:56:21 compute-0 nova_compute[189279]: 2025-12-10 19:56:21.400 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:21 compute-0 nova_compute[189279]: 2025-12-10 19:56:21.412 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:23 compute-0 podman[240855]: 2025-12-10 19:56:23.136313125 +0000 UTC m=+0.118368171 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 19:56:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:56:23.368 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:56:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:56:23.369 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:56:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:56:23.370 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:56:26 compute-0 nova_compute[189279]: 2025-12-10 19:56:26.405 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:26 compute-0 nova_compute[189279]: 2025-12-10 19:56:26.413 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:29 compute-0 podman[240881]: 2025-12-10 19:56:29.094175751 +0000 UTC m=+0.068480937 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251210, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 10 19:56:29 compute-0 podman[203484]: time="2025-12-10T19:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:56:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:56:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Dec 10 19:56:31 compute-0 nova_compute[189279]: 2025-12-10 19:56:31.407 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:31 compute-0 nova_compute[189279]: 2025-12-10 19:56:31.414 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:31 compute-0 openstack_network_exporter[205632]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:56:31 compute-0 openstack_network_exporter[205632]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:56:31 compute-0 openstack_network_exporter[205632]: ERROR   19:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:56:31 compute-0 openstack_network_exporter[205632]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:56:31 compute-0 openstack_network_exporter[205632]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:56:36 compute-0 nova_compute[189279]: 2025-12-10 19:56:36.410 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:36 compute-0 nova_compute[189279]: 2025-12-10 19:56:36.415 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:37 compute-0 podman[240900]: 2025-12-10 19:56:37.128080008 +0000 UTC m=+0.102856448 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter)
Dec 10 19:56:39 compute-0 podman[240921]: 2025-12-10 19:56:39.083902457 +0000 UTC m=+0.066450289 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 19:56:41 compute-0 nova_compute[189279]: 2025-12-10 19:56:41.412 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:41 compute-0 nova_compute[189279]: 2025-12-10 19:56:41.417 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:44 compute-0 nova_compute[189279]: 2025-12-10 19:56:44.214 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:44 compute-0 nova_compute[189279]: 2025-12-10 19:56:44.215 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:44 compute-0 nova_compute[189279]: 2025-12-10 19:56:44.236 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:44 compute-0 nova_compute[189279]: 2025-12-10 19:56:44.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:44 compute-0 nova_compute[189279]: 2025-12-10 19:56:44.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:56:44 compute-0 nova_compute[189279]: 2025-12-10 19:56:44.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:56:44 compute-0 podman[240947]: 2025-12-10 19:56:44.780062831 +0000 UTC m=+0.090616815 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, vcs-type=git, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Dec 10 19:56:44 compute-0 podman[240946]: 2025-12-10 19:56:44.78181236 +0000 UTC m=+0.096030197 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 19:56:44 compute-0 podman[240945]: 2025-12-10 19:56:44.791975754 +0000 UTC m=+0.111855290 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 10 19:56:45 compute-0 nova_compute[189279]: 2025-12-10 19:56:45.157 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:56:45 compute-0 nova_compute[189279]: 2025-12-10 19:56:45.158 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:56:45 compute-0 nova_compute[189279]: 2025-12-10 19:56:45.158 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 19:56:45 compute-0 nova_compute[189279]: 2025-12-10 19:56:45.158 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.414 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.419 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.558 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.572 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.573 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.573 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.574 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.574 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:46 compute-0 nova_compute[189279]: 2025-12-10 19:56:46.574 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:56:47 compute-0 nova_compute[189279]: 2025-12-10 19:56:47.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:47 compute-0 nova_compute[189279]: 2025-12-10 19:56:47.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.512 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.513 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.513 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.513 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.588 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.649 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.650 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.707 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.708 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.764 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.766 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.824 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.831 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.902 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.903 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.972 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:48 compute-0 nova_compute[189279]: 2025-12-10 19:56:48.973 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.034 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.035 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.095 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.441 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.443 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4986MB free_disk=72.35262298583984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.444 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.444 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.518 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.518 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.519 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.519 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.595 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.614 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.616 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:56:49 compute-0 nova_compute[189279]: 2025-12-10 19:56:49.617 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:56:50 compute-0 podman[241028]: 2025-12-10 19:56:50.088672652 +0000 UTC m=+0.061612942 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:56:50 compute-0 podman[241027]: 2025-12-10 19:56:50.08897887 +0000 UTC m=+0.065534227 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 10 19:56:51 compute-0 nova_compute[189279]: 2025-12-10 19:56:51.416 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:51 compute-0 nova_compute[189279]: 2025-12-10 19:56:51.420 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:54 compute-0 podman[241068]: 2025-12-10 19:56:54.114113562 +0000 UTC m=+0.097996172 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 10 19:56:56 compute-0 nova_compute[189279]: 2025-12-10 19:56:56.418 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:56 compute-0 nova_compute[189279]: 2025-12-10 19:56:56.422 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:56:59 compute-0 podman[203484]: time="2025-12-10T19:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:56:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:56:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Dec 10 19:57:00 compute-0 podman[241093]: 2025-12-10 19:57:00.123850206 +0000 UTC m=+0.106152311 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251210)
Dec 10 19:57:01 compute-0 openstack_network_exporter[205632]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:57:01 compute-0 openstack_network_exporter[205632]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:57:01 compute-0 openstack_network_exporter[205632]: ERROR   19:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:57:01 compute-0 openstack_network_exporter[205632]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:57:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:57:01 compute-0 openstack_network_exporter[205632]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:57:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:57:01 compute-0 nova_compute[189279]: 2025-12-10 19:57:01.419 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:01 compute-0 nova_compute[189279]: 2025-12-10 19:57:01.424 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:06 compute-0 nova_compute[189279]: 2025-12-10 19:57:06.421 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:06 compute-0 nova_compute[189279]: 2025-12-10 19:57:06.427 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:08 compute-0 podman[241112]: 2025-12-10 19:57:08.086024787 +0000 UTC m=+0.062016323 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, version=9.6)
Dec 10 19:57:10 compute-0 podman[241133]: 2025-12-10 19:57:10.08438214 +0000 UTC m=+0.068415765 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:57:11 compute-0 nova_compute[189279]: 2025-12-10 19:57:11.424 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:11 compute-0 nova_compute[189279]: 2025-12-10 19:57:11.429 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:15 compute-0 podman[241156]: 2025-12-10 19:57:15.073544616 +0000 UTC m=+0.058817426 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 19:57:15 compute-0 podman[241157]: 2025-12-10 19:57:15.087376829 +0000 UTC m=+0.067526270 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 19:57:15 compute-0 podman[241158]: 2025-12-10 19:57:15.102131877 +0000 UTC m=+0.074222211 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 10 19:57:16 compute-0 nova_compute[189279]: 2025-12-10 19:57:16.427 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:16 compute-0 nova_compute[189279]: 2025-12-10 19:57:16.430 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:21 compute-0 podman[241215]: 2025-12-10 19:57:21.097402402 +0000 UTC m=+0.076468742 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 19:57:21 compute-0 podman[241216]: 2025-12-10 19:57:21.116793384 +0000 UTC m=+0.093481039 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 19:57:21 compute-0 nova_compute[189279]: 2025-12-10 19:57:21.429 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:21 compute-0 nova_compute[189279]: 2025-12-10 19:57:21.432 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:22 compute-0 sshd-session[241257]: error: kex_exchange_identification: read: Connection reset by peer
Dec 10 19:57:22 compute-0 sshd-session[241257]: Connection reset by 45.140.17.97 port 43667
Dec 10 19:57:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:57:23.371 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:57:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:57:23.372 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:57:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:57:23.373 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
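The three ovn_metadata_agent lines show oslo.concurrency's lockutils wrapping ProcessMonitor._check_child_processes: the named lock is acquired after a 0.002 s wait, held for 0.001 s, then released. A minimal sketch of the same pattern with the oslo.concurrency decorator (the function body is a placeholder, not the agent's code):

```python
from oslo_concurrency import lockutils

# The decorator acquires the named in-process lock before the call and
# releases it afterwards; acquisition and release are logged at DEBUG,
# which is what the ovn_metadata_agent lines above show.
@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Placeholder body; the real ProcessMonitor walks its monitored
    # child processes here.
    return "ok"

if __name__ == "__main__":
    print(check_child_processes())
```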
Dec 10 19:57:25 compute-0 podman[241258]: 2025-12-10 19:57:25.15803405 +0000 UTC m=+0.132575193 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:57:26 compute-0 nova_compute[189279]: 2025-12-10 19:57:26.432 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:26 compute-0 nova_compute[189279]: 2025-12-10 19:57:26.435 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:29 compute-0 podman[203484]: time="2025-12-10T19:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:57:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:57:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
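These access-log style lines are clients (here the podman exporter, per its CONTAINER_HOST setting above) hitting the libpod REST API on the podman service socket. A minimal stdlib-only sketch that issues the same containers/json request over the Unix socket; the socket path comes from the exporter's volume list, and the API version prefix may differ on other hosts:

```python
import http.client
import json
import socket

SOCKET_PATH = "/run/podman/podman.sock"  # from the podman_exporter volumes above


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a Unix-domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


conn = UnixHTTPConnection(SOCKET_PATH)
# Same endpoint as the access-log line above.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for container in json.loads(resp.read()):
    print(container.get("Names"), container.get("State"))
```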
Dec 10 19:57:31 compute-0 podman[241284]: 2025-12-10 19:57:31.094627524 +0000 UTC m=+0.077958282 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 19:57:31 compute-0 openstack_network_exporter[205632]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:57:31 compute-0 openstack_network_exporter[205632]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:57:31 compute-0 openstack_network_exporter[205632]: ERROR   19:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:57:31 compute-0 openstack_network_exporter[205632]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:57:31 compute-0 openstack_network_exporter[205632]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
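The exporter errors above say it cannot find control-socket files for ovn-northd and the OVS database server. ovn-northd normally runs on the control plane rather than on a compute node, so that part is expected noise; a quick way to see which daemons actually expose control sockets on this host is to glob the usual run directories. The paths below are the common defaults matching the directories the exporter mounts as /run/openvswitch and /run/ovn, and may differ per deployment:

```python
import glob
import os

# Typical control-socket locations on an OVS/OVN compute node (illustrative).
PATTERNS = {
    "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovn-controller": "/var/lib/openvswitch/ovn/ovn-controller.*.ctl",
    "ovn-northd": "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
}

for daemon, pattern in PATTERNS.items():
    hits = glob.glob(pattern)
    status = ", ".join(os.path.basename(h) for h in hits) or "no control socket"
    print(f"{daemon}: {status}")
```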
Dec 10 19:57:31 compute-0 nova_compute[189279]: 2025-12-10 19:57:31.433 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:31 compute-0 nova_compute[189279]: 2025-12-10 19:57:31.437 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:36 compute-0 nova_compute[189279]: 2025-12-10 19:57:36.435 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:39 compute-0 podman[241302]: 2025-12-10 19:57:39.129932835 +0000 UTC m=+0.105263407 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:57:41 compute-0 podman[241324]: 2025-12-10 19:57:41.079143464 +0000 UTC m=+0.062392923 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:57:41 compute-0 nova_compute[189279]: 2025-12-10 19:57:41.437 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:41 compute-0 nova_compute[189279]: 2025-12-10 19:57:41.440 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.173 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.173 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.176 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
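The registration lines above hand each stevedore pollster extension to a ThreadPoolExecutor that, in this run, has a single worker, which is why the manager noted earlier that the cycle may take longer. A minimal, generic sketch of that dispatch pattern (the pollster functions are illustrative, not ceilometer internals):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative pollsters; the real agent loads stevedore extensions.
def poll_disk_ephemeral_size():
    return ("disk.ephemeral.size", 1)

def poll_network_incoming_packets():
    return ("network.incoming.packets", 17)

POLLSTERS = [poll_disk_ephemeral_size, poll_network_incoming_packets]

# With a single worker (as in the log above), pollsters queue up and the
# whole polling cycle takes longer than it would with more threads.
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(p) for p in POLLSTERS]
    for fut in futures:
        name, value = fut.result()
        print(f"{name}: {value}")
```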
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.182 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.187 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ac2c8050-72b5-419c-ba99-c4feeb26147a', 'name': 'vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
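The discovery entries show the per-instance data (name, flavor, image, tenant) that the compute agent reads locally from each libvirt domain's Nova metadata rather than from the Nova API. A minimal sketch using the libvirt Python bindings, assuming they are installed on the host; the metadata namespace matches the LIBVIRT_METADATA_URI value in the kepler configuration above, and error handling is omitted:

```python
import libvirt

NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"

# Connect read-only to the local hypervisor (the agent mounts /run/libvirt).
conn = libvirt.openReadOnly("qemu:///system")
for dom in conn.listAllDomains():
    # Returns the <nova:instance> XML fragment Nova attached to the domain,
    # carrying the name, flavor and owner fields seen in the log above.
    xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0)
    print(dom.UUIDString(), dom.name())
    print(xml[:200], "...")
conn.close()
```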
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.188 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.190 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.191 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T19:57:42.188218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T19:57:42.190955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.215 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.216 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.216 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.238 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.239 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.239 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
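The capacity samples line up with the flavor in the discovery output above: 'disk': 1 and 'ephemeral': 1 are sized in GiB, so each of those devices reports 1073741824 bytes, and the remaining small device (485376 or 583680 bytes) is most likely the config drive. A quick check of the unit arithmetic:

```python
# Flavor 'disk': 1 and 'ephemeral': 1 are in GiB; samples are in bytes.
assert 1 * 1024 ** 3 == 1073741824
# The small third device per instance, shown in KiB (likely the config drive).
print(485376 / 1024, "KiB", 583680 / 1024, "KiB")
```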
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.241 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.241 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.241 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.242 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T19:57:42.242249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.243 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.244 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.244 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.244 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.244 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.245 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T19:57:42.244976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.245 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.251 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.256 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.257 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.258 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.258 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.259 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.260 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T19:57:42.258043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.261 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.261 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.262 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T19:57:42.261523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.263 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T19:57:42.264375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.264 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.265 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes volume: 4736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.266 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.267 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.267 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes.delta volume: 182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
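network.outgoing.bytes is a cumulative interface counter, while the .delta variant reports the change since the previous polling cycle; for the first instance the log shows a cumulative 2202 bytes and a per-cycle delta of 70. A minimal sketch of that subtraction (the previous value, 2132, is inferred from those two logged numbers, not taken from the log itself):

```python
# Cumulative counters for one instance across two polling cycles.
previous_total = 2132  # inferred: 2202 - 70
current_total = 2202   # network.outgoing.bytes from the log above

delta = current_total - previous_total
print(delta)  # -> 70, matching network.outgoing.bytes.delta above
```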
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T19:57:42.267037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.269 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T19:57:42.270154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.289 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.312 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/memory.usage volume: 49.09375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.313 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.313 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.314 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T19:57:42.313905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.314 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T19:57:42.316149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.316 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.317 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.317 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.317 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.318 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.318 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.319 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.320 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T19:57:42.320107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.321 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.321 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.322 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T19:57:42.322356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.322 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T19:57:42.324405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.324 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.325 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.326 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T19:57:42.326343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.388 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.388 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.389 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.452 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.453 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.453 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.454 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.454 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.455 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 36960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.455 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/cpu volume: 158610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T19:57:42.454887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.456 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.456 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 425951231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.457 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 63853652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.457 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 49706577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.457 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 365261803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.457 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 76908904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.458 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 59898361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T19:57:42.456704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.459 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.460 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.460 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.460 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T19:57:42.459433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.461 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.461 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.462 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.462 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.462 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.462 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.463 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.463 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.463 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.464 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.464 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.465 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.465 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.465 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T19:57:42.462253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T19:57:42.464892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.465 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.466 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.466 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.467 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.467 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.467 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T19:57:42.467485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.468 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.469 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 816753194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.469 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 10242364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.469 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.470 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 1282282265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.470 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 10530105 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.470 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.471 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.471 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.472 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T19:57:42.469131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T19:57:42.471723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.472 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.472 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.473 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.473 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.474 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.474 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.474 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.474 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.475 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T19:57:42.474429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.479 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.479 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:57:42.479 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:57:45 compute-0 nova_compute[189279]: 2025-12-10 19:57:45.617 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:45 compute-0 nova_compute[189279]: 2025-12-10 19:57:45.618 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:45 compute-0 nova_compute[189279]: 2025-12-10 19:57:45.619 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:46 compute-0 podman[241351]: 2025-12-10 19:57:46.105145852 +0000 UTC m=+0.077768646 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container)
Dec 10 19:57:46 compute-0 podman[241350]: 2025-12-10 19:57:46.105332958 +0000 UTC m=+0.080012728 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 10 19:57:46 compute-0 podman[241349]: 2025-12-10 19:57:46.12545415 +0000 UTC m=+0.107252861 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 10 19:57:46 compute-0 nova_compute[189279]: 2025-12-10 19:57:46.440 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:57:46 compute-0 nova_compute[189279]: 2025-12-10 19:57:46.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:46 compute-0 nova_compute[189279]: 2025-12-10 19:57:46.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:57:46 compute-0 nova_compute[189279]: 2025-12-10 19:57:46.977 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:57:46 compute-0 nova_compute[189279]: 2025-12-10 19:57:46.978 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:57:46 compute-0 nova_compute[189279]: 2025-12-10 19:57:46.978 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.674 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updating instance_info_cache with network_info: [{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.693 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.694 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.695 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.695 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.696 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.696 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.696 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.725 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.727 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.728 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.729 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.821 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.889 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.890 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.947 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:48 compute-0 nova_compute[189279]: 2025-12-10 19:57:48.949 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.009 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.010 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.070 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.078 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.135 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.137 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.204 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.206 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.286 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.288 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.373 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.705 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.707 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4994MB free_disk=72.35264205932617GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.707 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.708 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.786 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.787 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.787 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.788 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.853 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.869 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.871 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:57:49 compute-0 nova_compute[189279]: 2025-12-10 19:57:49.872 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:57:50 compute-0 nova_compute[189279]: 2025-12-10 19:57:50.664 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:57:51 compute-0 nova_compute[189279]: 2025-12-10 19:57:51.443 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 19:57:52 compute-0 podman[241429]: 2025-12-10 19:57:52.080792388 +0000 UTC m=+0.064237322 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:57:52 compute-0 podman[241428]: 2025-12-10 19:57:52.127153307 +0000 UTC m=+0.112148602 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 19:57:56 compute-0 podman[241468]: 2025-12-10 19:57:56.157728436 +0000 UTC m=+0.135640476 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:57:56 compute-0 nova_compute[189279]: 2025-12-10 19:57:56.446 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 19:57:59 compute-0 podman[203484]: time="2025-12-10T19:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:57:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:57:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Dec 10 19:58:01 compute-0 openstack_network_exporter[205632]: ERROR   19:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:58:01 compute-0 openstack_network_exporter[205632]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:58:01 compute-0 openstack_network_exporter[205632]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:58:01 compute-0 openstack_network_exporter[205632]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:58:01 compute-0 openstack_network_exporter[205632]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:58:01 compute-0 nova_compute[189279]: 2025-12-10 19:58:01.449 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:02 compute-0 podman[241492]: 2025-12-10 19:58:02.104443023 +0000 UTC m=+0.085028072 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec 10 19:58:06 compute-0 nova_compute[189279]: 2025-12-10 19:58:06.589 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:10 compute-0 podman[241511]: 2025-12-10 19:58:10.132890499 +0000 UTC m=+0.103303324 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:58:11 compute-0 nova_compute[189279]: 2025-12-10 19:58:11.454 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:11 compute-0 nova_compute[189279]: 2025-12-10 19:58:11.594 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:12 compute-0 podman[241531]: 2025-12-10 19:58:12.069349354 +0000 UTC m=+0.055546107 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 19:58:16 compute-0 nova_compute[189279]: 2025-12-10 19:58:16.457 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:16 compute-0 nova_compute[189279]: 2025-12-10 19:58:16.597 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:17 compute-0 podman[241554]: 2025-12-10 19:58:17.101819098 +0000 UTC m=+0.076824401 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 10 19:58:17 compute-0 podman[241555]: 2025-12-10 19:58:17.108495397 +0000 UTC m=+0.077244792 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release=1214.1726694543)
Dec 10 19:58:17 compute-0 podman[241553]: 2025-12-10 19:58:17.128813225 +0000 UTC m=+0.103364186 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 10 19:58:21 compute-0 nova_compute[189279]: 2025-12-10 19:58:21.460 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:21 compute-0 nova_compute[189279]: 2025-12-10 19:58:21.600 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:23 compute-0 podman[241611]: 2025-12-10 19:58:23.095224262 +0000 UTC m=+0.066712458 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:58:23 compute-0 podman[241610]: 2025-12-10 19:58:23.119431175 +0000 UTC m=+0.096118511 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 10 19:58:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:58:23.373 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:58:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:58:23.373 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:58:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:58:23.374 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:58:26 compute-0 nova_compute[189279]: 2025-12-10 19:58:26.463 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:26 compute-0 nova_compute[189279]: 2025-12-10 19:58:26.603 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:27 compute-0 podman[241652]: 2025-12-10 19:58:27.141206166 +0000 UTC m=+0.114109896 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 10 19:58:29 compute-0 podman[203484]: time="2025-12-10T19:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:58:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:58:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Dec 10 19:58:31 compute-0 openstack_network_exporter[205632]: ERROR   19:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:58:31 compute-0 openstack_network_exporter[205632]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:58:31 compute-0 openstack_network_exporter[205632]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:58:31 compute-0 openstack_network_exporter[205632]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:58:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:58:31 compute-0 openstack_network_exporter[205632]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:58:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 19:58:31 compute-0 nova_compute[189279]: 2025-12-10 19:58:31.466 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:31 compute-0 nova_compute[189279]: 2025-12-10 19:58:31.605 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:33 compute-0 podman[241677]: 2025-12-10 19:58:33.121081316 +0000 UTC m=+0.099079011 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 19:58:36 compute-0 nova_compute[189279]: 2025-12-10 19:58:36.468 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:36 compute-0 nova_compute[189279]: 2025-12-10 19:58:36.607 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:41 compute-0 podman[241697]: 2025-12-10 19:58:41.090162332 +0000 UTC m=+0.071795955 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 10 19:58:41 compute-0 nova_compute[189279]: 2025-12-10 19:58:41.471 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:41 compute-0 nova_compute[189279]: 2025-12-10 19:58:41.610 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:43 compute-0 podman[241718]: 2025-12-10 19:58:43.09313921 +0000 UTC m=+0.067765946 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:58:46 compute-0 nova_compute[189279]: 2025-12-10 19:58:46.474 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:46 compute-0 nova_compute[189279]: 2025-12-10 19:58:46.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:46 compute-0 nova_compute[189279]: 2025-12-10 19:58:46.506 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:46 compute-0 nova_compute[189279]: 2025-12-10 19:58:46.612 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:47 compute-0 nova_compute[189279]: 2025-12-10 19:58:47.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:47 compute-0 nova_compute[189279]: 2025-12-10 19:58:47.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:48 compute-0 podman[241742]: 2025-12-10 19:58:48.124419261 +0000 UTC m=+0.101053353 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 10 19:58:48 compute-0 podman[241743]: 2025-12-10 19:58:48.125435689 +0000 UTC m=+0.091650730 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 19:58:48 compute-0 podman[241744]: 2025-12-10 19:58:48.145471548 +0000 UTC m=+0.110804326 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 10 19:58:48 compute-0 nova_compute[189279]: 2025-12-10 19:58:48.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:48 compute-0 nova_compute[189279]: 2025-12-10 19:58:48.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:58:48 compute-0 nova_compute[189279]: 2025-12-10 19:58:48.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 19:58:49 compute-0 nova_compute[189279]: 2025-12-10 19:58:49.214 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:58:49 compute-0 nova_compute[189279]: 2025-12-10 19:58:49.215 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:58:49 compute-0 nova_compute[189279]: 2025-12-10 19:58:49.215 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 19:58:49 compute-0 nova_compute[189279]: 2025-12-10 19:58:49.216 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.244 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.259 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.260 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.261 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.261 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.261 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.262 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.262 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.263 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.291 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.292 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.293 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.293 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.379 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.444 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.445 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.476 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.503 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.507 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.567 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.568 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.615 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.626 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.633 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.702 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.703 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.774 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.780 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.843 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.845 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:58:51 compute-0 nova_compute[189279]: 2025-12-10 19:58:51.905 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.221 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.225 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4968MB free_disk=72.35268020629883GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.226 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.226 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.311 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.312 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.312 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.313 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.329 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.345 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.346 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.361 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.392 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.441 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.460 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.462 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:58:52 compute-0 nova_compute[189279]: 2025-12-10 19:58:52.462 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:58:54 compute-0 podman[241821]: 2025-12-10 19:58:54.078088246 +0000 UTC m=+0.056861404 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 19:58:54 compute-0 podman[241820]: 2025-12-10 19:58:54.082858204 +0000 UTC m=+0.065321291 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:58:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:58:55.532 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:58:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:58:55.534 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 19:58:55 compute-0 nova_compute[189279]: 2025-12-10 19:58:55.536 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:56 compute-0 nova_compute[189279]: 2025-12-10 19:58:56.478 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:56 compute-0 nova_compute[189279]: 2025-12-10 19:58:56.619 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:58:58 compute-0 podman[241858]: 2025-12-10 19:58:58.144428384 +0000 UTC m=+0.129208196 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 10 19:58:59 compute-0 podman[203484]: time="2025-12-10T19:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:58:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:58:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.232 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.234 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.263 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.363 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.364 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.373 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.374 189283 INFO nova.compute.claims [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Claim successful on node compute-0.ctlplane.example.com
Dec 10 19:59:01 compute-0 openstack_network_exporter[205632]: ERROR   19:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:59:01 compute-0 openstack_network_exporter[205632]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:59:01 compute-0 openstack_network_exporter[205632]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:59:01 compute-0 openstack_network_exporter[205632]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:59:01 compute-0 openstack_network_exporter[205632]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.480 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.528 189283 DEBUG nova.compute.provider_tree [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.543 189283 DEBUG nova.scheduler.client.report [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.564 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.565 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.613 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.614 189283 DEBUG nova.network.neutron [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.621 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.651 189283 INFO nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.682 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.801 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.807 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.808 189283 INFO nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Creating image(s)
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.809 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.809 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.810 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.823 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.880 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.881 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "193edf3941027c090c206b4992bbea3ae5563eb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.882 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.894 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.951 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:01 compute-0 nova_compute[189279]: 2025-12-10 19:59:01.953 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.029 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk 1073741824" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.031 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.032 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.090 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.099 189283 DEBUG nova.virt.disk.api [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Checking if we can resize image /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.100 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.160 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.162 189283 DEBUG nova.virt.disk.api [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Cannot resize image /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.162 189283 DEBUG nova.objects.instance [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'migration_context' on Instance uuid 51eb07cf-1168-4801-98e1-e0188e2c5f55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.190 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.190 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.191 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.204 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.262 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.263 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.264 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.275 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.334 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.335 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.545 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.eph0 1073741824" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.546 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.547 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.605 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.607 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.607 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Ensure instance console log exists: /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.609 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.609 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.610 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.714 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.716 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.747 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.827 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.835 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.844 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 19:59:02 compute-0 nova_compute[189279]: 2025-12-10 19:59:02.845 189283 INFO nova.compute.claims [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Claim successful on node compute-0.ctlplane.example.com
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.181 189283 DEBUG nova.compute.provider_tree [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.217 189283 DEBUG nova.scheduler.client.report [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.243 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.244 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.294 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.295 189283 DEBUG nova.network.neutron [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.325 189283 INFO nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.372 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.469 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.475 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.476 189283 INFO nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Creating image(s)
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.477 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.478 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.478 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.491 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.545 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.546 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "193edf3941027c090c206b4992bbea3ae5563eb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.547 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.558 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.614 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.615 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.669 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.670 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.671 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.729 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.739 189283 DEBUG nova.virt.disk.api [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Checking if we can resize image /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.740 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.796 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.797 189283 DEBUG nova.virt.disk.api [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Cannot resize image /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.798 189283 DEBUG nova.objects.instance [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'migration_context' on Instance uuid 1fbc523f-accf-4848-80b7-6d997e0c65bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.812 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.815 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.816 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.828 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.887 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.888 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.889 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.902 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.958 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:03 compute-0 nova_compute[189279]: 2025-12-10 19:59:03.959 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:04 compute-0 podman[241936]: 2025-12-10 19:59:04.089078785 +0000 UTC m=+0.074462235 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251210, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.585 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 1073741824" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.587 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.587 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.683 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.684 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.685 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Ensure instance console log exists: /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.686 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.686 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:04 compute-0 nova_compute[189279]: 2025-12-10 19:59:04.687 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:05.537 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.483 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.624 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.643 189283 DEBUG nova.network.neutron [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Successfully updated port: 8c3f3594-74a1-4927-9de3-1d09f5a52be0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.660 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.660 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquired lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.660 189283 DEBUG nova.network.neutron [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.727 189283 DEBUG nova.network.neutron [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Successfully updated port: b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.743 189283 DEBUG nova.compute.manager [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received event network-changed-8c3f3594-74a1-4927-9de3-1d09f5a52be0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.744 189283 DEBUG nova.compute.manager [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Refreshing instance network info cache due to event network-changed-8c3f3594-74a1-4927-9de3-1d09f5a52be0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.744 189283 DEBUG oslo_concurrency.lockutils [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.755 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.755 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquired lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.755 189283 DEBUG nova.network.neutron [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.803 189283 DEBUG nova.compute.manager [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-changed-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.803 189283 DEBUG nova.compute.manager [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Refreshing instance network info cache due to event network-changed-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.804 189283 DEBUG oslo_concurrency.lockutils [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.844 189283 DEBUG nova.network.neutron [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 19:59:06 compute-0 nova_compute[189279]: 2025-12-10 19:59:06.898 189283 DEBUG nova.network.neutron [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.830 189283 DEBUG nova.network.neutron [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Updating instance_info_cache with network_info: [{"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.857 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Releasing lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.857 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Instance network_info: |[{"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.858 189283 DEBUG oslo_concurrency.lockutils [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.858 189283 DEBUG nova.network.neutron [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Refreshing network info cache for port 8c3f3594-74a1-4927-9de3-1d09f5a52be0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.862 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Start _get_guest_xml network_info=[{"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_options': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.869 189283 WARNING nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.880 189283 DEBUG nova.virt.libvirt.host [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.880 189283 DEBUG nova.virt.libvirt.host [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.885 189283 DEBUG nova.virt.libvirt.host [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.885 189283 DEBUG nova.virt.libvirt.host [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.885 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.885 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T19:52:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0fc2e5b1-b522-4c52-bdef-97db09e458e4',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.886 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.886 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.886 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.886 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.887 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.887 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.887 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.887 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.887 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.887 189283 DEBUG nova.virt.hardware [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.890 189283 DEBUG nova.virt.libvirt.vif [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-od5vzkxsmgya-eshjaqtxgfue-vnf-eucagx5bjrqt',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-od5vzkxsmgya-eshjaqtxgfue-vnf-eucagx5bjrqt',id=3,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-ha6jivrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:59:01Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM2MzI5MTc0ODM0OTExNzg1MjE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzYzMjkxNzQ4MzQ5MTE3ODUyMT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM2MzI5MTc0ODM0OTExNzg1MjE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 10 19:59:07 compute-0 nova_compute[189279]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzYzMjkxNzQ4MzQ5MTE3ODUyMT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM2MzI5MTc0ODM0OTExNzg1MjE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=51eb07cf-1168-4801-98e1-e0188e2c5f55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.891 189283 DEBUG nova.network.os_vif_util [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.891 189283 DEBUG nova.network.os_vif_util [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:f1:12,bridge_name='br-int',has_traffic_filtering=True,id=8c3f3594-74a1-4927-9de3-1d09f5a52be0,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8c3f3594-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.892 189283 DEBUG nova.objects.instance [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'pci_devices' on Instance uuid 51eb07cf-1168-4801-98e1-e0188e2c5f55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.910 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] End _get_guest_xml xml=<domain type="kvm">
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <uuid>51eb07cf-1168-4801-98e1-e0188e2c5f55</uuid>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <name>instance-00000003</name>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <memory>524288</memory>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <metadata>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <nova:name>vn-xzxegr5-od5vzkxsmgya-eshjaqtxgfue-vnf-eucagx5bjrqt</nova:name>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 19:59:07</nova:creationTime>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <nova:flavor name="m1.small">
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:memory>512</nova:memory>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:ephemeral>1</nova:ephemeral>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:user uuid="2143e69e49fd49db99c8737c973c1ea5">admin</nova:user>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:project uuid="fe518ea62a94467e823b2b1046c57a2e">admin</nova:project>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="06e6231d-0a77-4b09-acb3-e7faf5a777be"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         <nova:port uuid="8c3f3594-74a1-4927-9de3-1d09f5a52be0">
Dec 10 19:59:07 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="192.168.0.167" ipVersion="4"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   </metadata>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <system>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <entry name="serial">51eb07cf-1168-4801-98e1-e0188e2c5f55</entry>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <entry name="uuid">51eb07cf-1168-4801-98e1-e0188e2c5f55</entry>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </system>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <os>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   </os>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <features>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <apic/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   </features>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   </clock>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.eph0"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <target dev="vdb" bus="virtio"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.config"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:49:f1:12"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <target dev="tap8c3f3594-74"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/console.log" append="off"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </serial>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <video>
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </video>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 19:59:07 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 19:59:07 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 19:59:07 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:59:07 compute-0 nova_compute[189279]: </domain>
Dec 10 19:59:07 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.911 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Preparing to wait for external event network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.911 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.911 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.912 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.913 189283 DEBUG nova.virt.libvirt.vif [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-od5vzkxsmgya-eshjaqtxgfue-vnf-eucagx5bjrqt',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-od5vzkxsmgya-eshjaqtxgfue-vnf-eucagx5bjrqt',id=3,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-ha6jivrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:59:01Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM2MzI5MTc0ODM0OTExNzg1MjE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzYzMjkxNzQ4MzQ5MTE3ODUyMT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM2MzI5MTc0ODM0OTExNzg1MjE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 10 19:59:07 compute-0 nova_compute[189279]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzYzMjkxNzQ4MzQ5MTE3ODUyMT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM2MzI5MTc0ODM0OTExNzg1MjE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=51eb07cf-1168-4801-98e1-e0188e2c5f55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.913 189283 DEBUG nova.network.os_vif_util [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.914 189283 DEBUG nova.network.os_vif_util [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:f1:12,bridge_name='br-int',has_traffic_filtering=True,id=8c3f3594-74a1-4927-9de3-1d09f5a52be0,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8c3f3594-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.915 189283 DEBUG os_vif [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:f1:12,bridge_name='br-int',has_traffic_filtering=True,id=8c3f3594-74a1-4927-9de3-1d09f5a52be0,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8c3f3594-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.916 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.916 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.917 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.920 189283 DEBUG nova.network.neutron [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updating instance_info_cache with network_info: [{"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.922 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.922 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c3f3594-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.923 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c3f3594-74, col_values=(('external_ids', {'iface-id': '8c3f3594-74a1-4927-9de3-1d09f5a52be0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:f1:12', 'vm-uuid': '51eb07cf-1168-4801-98e1-e0188e2c5f55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.925 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:07 compute-0 NetworkManager[56238]: <info>  [1765396747.9261] manager: (tap8c3f3594-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.926 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.933 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.934 189283 INFO os_vif [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:f1:12,bridge_name='br-int',has_traffic_filtering=True,id=8c3f3594-74a1-4927-9de3-1d09f5a52be0,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8c3f3594-74')
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.949 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Releasing lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.949 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Instance network_info: |[{"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.950 189283 DEBUG oslo_concurrency.lockutils [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.950 189283 DEBUG nova.network.neutron [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Refreshing network info cache for port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.953 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Start _get_guest_xml network_info=[{"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_options': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.963 189283 WARNING nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.971 189283 DEBUG nova.virt.libvirt.host [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.971 189283 DEBUG nova.virt.libvirt.host [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.992 189283 DEBUG nova.virt.libvirt.host [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.992 189283 DEBUG nova.virt.libvirt.host [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.992 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.993 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T19:52:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0fc2e5b1-b522-4c52-bdef-97db09e458e4',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.993 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.993 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.993 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.993 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.994 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.994 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.994 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.994 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.994 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.994 189283 DEBUG nova.virt.hardware [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.997 189283 DEBUG nova.virt.libvirt.vif [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp',id=4,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-eobt2g4q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:59:03Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODE2MTk1MDUxMjk2OTY2NDAyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 10 19:59:07 compute-0 nova_compute[189279]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODE2MTk1MDUxMjk2OTY2NDAyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=1fbc523f-accf-4848-80b7-6d997e0c65bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.998 189283 DEBUG nova.network.os_vif_util [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.998 189283 DEBUG nova.network.os_vif_util [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:db:85,bridge_name='br-int',has_traffic_filtering=True,id=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4b01034-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:59:07 compute-0 nova_compute[189279]: 2025-12-10 19:59:07.999 189283 DEBUG nova.objects.instance [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fbc523f-accf-4848-80b7-6d997e0c65bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.012 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.012 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.012 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.013 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No VIF found with MAC fa:16:3e:49:f1:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.013 189283 INFO nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Using config drive
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.016 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] End _get_guest_xml xml=<domain type="kvm">
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <uuid>1fbc523f-accf-4848-80b7-6d997e0c65bf</uuid>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <name>instance-00000004</name>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <memory>524288</memory>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <metadata>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <nova:name>vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp</nova:name>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 19:59:07</nova:creationTime>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <nova:flavor name="m1.small">
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:memory>512</nova:memory>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:ephemeral>1</nova:ephemeral>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:user uuid="2143e69e49fd49db99c8737c973c1ea5">admin</nova:user>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:project uuid="fe518ea62a94467e823b2b1046c57a2e">admin</nova:project>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="06e6231d-0a77-4b09-acb3-e7faf5a777be"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         <nova:port uuid="b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70">
Dec 10 19:59:08 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="192.168.0.7" ipVersion="4"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   </metadata>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <system>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <entry name="serial">1fbc523f-accf-4848-80b7-6d997e0c65bf</entry>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <entry name="uuid">1fbc523f-accf-4848-80b7-6d997e0c65bf</entry>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </system>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <os>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   </os>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <features>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <apic/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   </features>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   </clock>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   </cpu>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   <devices>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <target dev="vdb" bus="virtio"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.config"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </disk>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:85:db:85"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <target dev="tapb4b01034-4b"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </interface>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/console.log" append="off"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </serial>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <video>
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </video>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </rng>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 19:59:08 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 19:59:08 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 19:59:08 compute-0 nova_compute[189279]:   </devices>
Dec 10 19:59:08 compute-0 nova_compute[189279]: </domain>
Dec 10 19:59:08 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.016 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Preparing to wait for external event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.016 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.017 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.017 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.017 189283 DEBUG nova.virt.libvirt.vif [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T19:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp',id=4,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-eobt2g4q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T19:59:03Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODE2MTk1MDUxMjk2OTY2NDAyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 10 19:59:08 compute-0 nova_compute[189279]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODE2MTk1MDUxMjk2OTY2NDAyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=1fbc523f-accf-4848-80b7-6d997e0c65bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.018 189283 DEBUG nova.network.os_vif_util [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.018 189283 DEBUG nova.network.os_vif_util [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:db:85,bridge_name='br-int',has_traffic_filtering=True,id=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4b01034-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.018 189283 DEBUG os_vif [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:db:85,bridge_name='br-int',has_traffic_filtering=True,id=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4b01034-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.019 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.019 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.019 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.022 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.022 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b01034-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.022 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4b01034-4b, col_values=(('external_ids', {'iface-id': 'b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:db:85', 'vm-uuid': '1fbc523f-accf-4848-80b7-6d997e0c65bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.024 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:08 compute-0 NetworkManager[56238]: <info>  [1765396748.0259] manager: (tapb4b01034-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.026 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.036 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.040 189283 INFO os_vif [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:db:85,bridge_name='br-int',has_traffic_filtering=True,id=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4b01034-4b')
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.098 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.098 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.099 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.099 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No VIF found with MAC fa:16:3e:85:db:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 19:59:08 compute-0 nova_compute[189279]: 2025-12-10 19:59:08.099 189283 INFO nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Using config drive
Dec 10 19:59:08 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 19:59:07.890 189283 DEBUG nova.virt.libvirt.vif [None req-099f9f93-1f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 19:59:08 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 19:59:07.913 189283 DEBUG nova.virt.libvirt.vif [None req-099f9f93-1f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 19:59:08 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 19:59:07.997 189283 DEBUG nova.virt.libvirt.vif [None req-ec2aa715-8d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 19:59:08 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 19:59:08.017 189283 DEBUG nova.virt.libvirt.vif [None req-ec2aa715-8d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.295 189283 INFO nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Creating config drive at /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.config
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.303 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq14cljpq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.321 189283 INFO nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Creating config drive at /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.config
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.328 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4974er25 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.429 189283 DEBUG oslo_concurrency.processutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq14cljpq" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.461 189283 DEBUG oslo_concurrency.processutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4974er25" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:09 compute-0 kernel: tapb4b01034-4b: entered promiscuous mode
Dec 10 19:59:09 compute-0 NetworkManager[56238]: <info>  [1765396749.4998] manager: (tapb4b01034-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00040|binding|INFO|Claiming lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for this chassis.
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00041|binding|INFO|b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70: Claiming fa:16:3e:85:db:85 192.168.0.7
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.502 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.511 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:db:85 192.168.0.7'], port_security=['fa:16:3e:85:db:85 192.168.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:cidrs': '192.168.0.7/24', 'neutron:device_id': '1fbc523f-accf-4848-80b7-6d997e0c65bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.513 106564 INFO neutron.agent.ovn.metadata.agent [-] Port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 bound to our chassis
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.515 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00042|binding|INFO|Setting lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 ovn-installed in OVS
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00043|binding|INFO|Setting lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 up in Southbound
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.528 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.534 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.534 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[46bb5a15-9424-4971-b451-902bcc0d9713]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 kernel: tap8c3f3594-74: entered promiscuous mode
Dec 10 19:59:09 compute-0 NetworkManager[56238]: <info>  [1765396749.5445] manager: (tap8c3f3594-74): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.548 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00044|binding|INFO|Claiming lport 8c3f3594-74a1-4927-9de3-1d09f5a52be0 for this chassis.
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00045|binding|INFO|8c3f3594-74a1-4927-9de3-1d09f5a52be0: Claiming fa:16:3e:49:f1:12 192.168.0.167
Dec 10 19:59:09 compute-0 systemd-udevd[242003]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:59:09 compute-0 systemd-udevd[242006]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.556 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:f1:12 192.168.0.167'], port_security=['fa:16:3e:49:f1:12 192.168.0.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-od5vzkxsmgya-eshjaqtxgfue-port-gidketh4l52q', 'neutron:cidrs': '192.168.0.167/24', 'neutron:device_id': '51eb07cf-1168-4801-98e1-e0188e2c5f55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-od5vzkxsmgya-eshjaqtxgfue-port-gidketh4l52q', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=8c3f3594-74a1-4927-9de3-1d09f5a52be0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00046|binding|INFO|Setting lport 8c3f3594-74a1-4927-9de3-1d09f5a52be0 ovn-installed in OVS
Dec 10 19:59:09 compute-0 ovn_controller[97701]: 2025-12-10T19:59:09Z|00047|binding|INFO|Setting lport 8c3f3594-74a1-4927-9de3-1d09f5a52be0 up in Southbound
Dec 10 19:59:09 compute-0 systemd-machined[155642]: New machine qemu-3-instance-00000004.
Dec 10 19:59:09 compute-0 NetworkManager[56238]: <info>  [1765396749.5704] device (tap8c3f3594-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 19:59:09 compute-0 NetworkManager[56238]: <info>  [1765396749.5713] device (tap8c3f3594-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.571 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.573 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[7a233bd4-66ab-4072-8b11-552c77cbb89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 NetworkManager[56238]: <info>  [1765396749.5746] device (tapb4b01034-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 19:59:09 compute-0 NetworkManager[56238]: <info>  [1765396749.5751] device (tapb4b01034-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 19:59:09 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.576 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f0628d-bf3a-4a4d-9442-b6cc736146c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 systemd-machined[155642]: New machine qemu-4-instance-00000003.
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.603 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[65651716-45e1-422d-b746-425f9a9b7274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000003.
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.622 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6577f71b-e8f3-4a8b-a611-16851d51f3ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 33508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242016, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.650 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[724a55b6-2fb6-41d7-9014-929ba8a3e427]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242022, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242022, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.652 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.654 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.656 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.656 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.657 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.657 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.659 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 8c3f3594-74a1-4927-9de3-1d09f5a52be0 in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 unbound from our chassis
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.661 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.684 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e8492aa3-675c-46de-ba83-c1379869ea86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.720 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0f2917-b000-4710-9eb0-6494e8f3bd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.723 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[7be3ccec-87f4-448b-85bc-e00e9a340a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.748 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e0471c-63fe-4c28-b592-06eb7d5d570b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.767 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[59d203b1-90df-43ef-a39b-3057de99f796]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 33508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242035, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.785 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[8eee6289-4872-4602-b058-9dc01050967c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242036, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242036, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.787 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.789 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.791 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.791 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.791 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.792 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:09.792 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.929 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396749.928347, 51eb07cf-1168-4801-98e1-e0188e2c5f55 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.929 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] VM Started (Lifecycle Event)
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.947 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.955 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396749.9285476, 51eb07cf-1168-4801-98e1-e0188e2c5f55 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.955 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] VM Paused (Lifecycle Event)
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.974 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.980 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.997 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.998 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396749.941048, 1fbc523f-accf-4848-80b7-6d997e0c65bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:59:09 compute-0 nova_compute[189279]: 2025-12-10 19:59:09.998 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] VM Started (Lifecycle Event)
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.015 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.024 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396749.9411495, 1fbc523f-accf-4848-80b7-6d997e0c65bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.024 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] VM Paused (Lifecycle Event)
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.045 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.054 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.075 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:59:10 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 10 19:59:10 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.830 189283 DEBUG nova.compute.manager [req-408f8c39-eadd-4958-96db-15d488e5ed63 req-8bb1c909-bb9d-4218-82aa-320f5def6ea9 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.831 189283 DEBUG oslo_concurrency.lockutils [req-408f8c39-eadd-4958-96db-15d488e5ed63 req-8bb1c909-bb9d-4218-82aa-320f5def6ea9 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.831 189283 DEBUG oslo_concurrency.lockutils [req-408f8c39-eadd-4958-96db-15d488e5ed63 req-8bb1c909-bb9d-4218-82aa-320f5def6ea9 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.832 189283 DEBUG oslo_concurrency.lockutils [req-408f8c39-eadd-4958-96db-15d488e5ed63 req-8bb1c909-bb9d-4218-82aa-320f5def6ea9 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.832 189283 DEBUG nova.compute.manager [req-408f8c39-eadd-4958-96db-15d488e5ed63 req-8bb1c909-bb9d-4218-82aa-320f5def6ea9 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Processing event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.832 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.837 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396750.8373704, 1fbc523f-accf-4848-80b7-6d997e0c65bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.839 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] VM Resumed (Lifecycle Event)
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.842 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.850 189283 INFO nova.virt.libvirt.driver [-] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Instance spawned successfully.
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.851 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.875 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.895 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.899 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.899 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.900 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.900 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.901 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.901 189283 DEBUG nova.virt.libvirt.driver [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.938 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.975 189283 INFO nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Took 7.50 seconds to spawn the instance on the hypervisor.
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.976 189283 DEBUG nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.984 189283 DEBUG nova.network.neutron [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updated VIF entry in instance network info cache for port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.985 189283 DEBUG nova.network.neutron [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updating instance_info_cache with network_info: [{"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.994 189283 DEBUG nova.network.neutron [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Updated VIF entry in instance network info cache for port 8c3f3594-74a1-4927-9de3-1d09f5a52be0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 19:59:10 compute-0 nova_compute[189279]: 2025-12-10 19:59:10.994 189283 DEBUG nova.network.neutron [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Updating instance_info_cache with network_info: [{"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:59:11 compute-0 nova_compute[189279]: 2025-12-10 19:59:11.026 189283 DEBUG oslo_concurrency.lockutils [req-13584b6a-b360-4289-82ab-616ae90f6a75 req-b33facf4-19dc-44d9-8045-f6ea213a213d 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:59:11 compute-0 nova_compute[189279]: 2025-12-10 19:59:11.045 189283 INFO nova.compute.manager [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Took 8.24 seconds to build instance.
Dec 10 19:59:11 compute-0 nova_compute[189279]: 2025-12-10 19:59:11.063 189283 DEBUG oslo_concurrency.lockutils [req-79f42324-bae6-4c96-8f2f-6e0cbe9d1708 req-d1d2ebff-d186-42c1-915b-98455207ba2a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:59:11 compute-0 nova_compute[189279]: 2025-12-10 19:59:11.065 189283 DEBUG oslo_concurrency.lockutils [None req-ec2aa715-8dd5-4d90-a508-c80492faeb18 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:11 compute-0 nova_compute[189279]: 2025-12-10 19:59:11.486 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:12 compute-0 podman[242071]: 2025-12-10 19:59:12.130510334 +0000 UTC m=+0.100414411 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7)
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.914 189283 DEBUG nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.914 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.915 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.915 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.915 189283 DEBUG nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] No waiting events found dispatching network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.915 189283 WARNING nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received unexpected event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for instance with vm_state active and task_state None.
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.916 189283 DEBUG nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received event network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.916 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.916 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.917 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.917 189283 DEBUG nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Processing event network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.917 189283 DEBUG nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received event network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.917 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.918 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.918 189283 DEBUG oslo_concurrency.lockutils [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.918 189283 DEBUG nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] No waiting events found dispatching network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.918 189283 WARNING nova.compute.manager [req-19702324-d573-440c-9d09-c89c55b6d216 req-b6e9d77c-4570-465b-b455-04e39c64c955 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received unexpected event network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 for instance with vm_state building and task_state spawning.
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.919 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.925 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396752.9239194, 51eb07cf-1168-4801-98e1-e0188e2c5f55 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.925 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] VM Resumed (Lifecycle Event)
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.928 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.933 189283 INFO nova.virt.libvirt.driver [-] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Instance spawned successfully.
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.934 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.948 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.959 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.962 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.962 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.962 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.963 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.963 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.963 189283 DEBUG nova.virt.libvirt.driver [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 19:59:12 compute-0 nova_compute[189279]: 2025-12-10 19:59:12.986 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 19:59:13 compute-0 nova_compute[189279]: 2025-12-10 19:59:13.013 189283 INFO nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Took 11.21 seconds to spawn the instance on the hypervisor.
Dec 10 19:59:13 compute-0 nova_compute[189279]: 2025-12-10 19:59:13.013 189283 DEBUG nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:13 compute-0 nova_compute[189279]: 2025-12-10 19:59:13.026 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:13 compute-0 nova_compute[189279]: 2025-12-10 19:59:13.075 189283 INFO nova.compute.manager [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Took 11.75 seconds to build instance.
Dec 10 19:59:13 compute-0 nova_compute[189279]: 2025-12-10 19:59:13.091 189283 DEBUG oslo_concurrency.lockutils [None req-099f9f93-1f32-466a-8e23-20324f65a62e 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:14 compute-0 podman[242091]: 2025-12-10 19:59:14.162963221 +0000 UTC m=+0.126574035 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 19:59:16 compute-0 nova_compute[189279]: 2025-12-10 19:59:16.489 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.029 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.841 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.842 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.843 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.843 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.844 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.846 189283 INFO nova.compute.manager [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Terminating instance
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.855 189283 DEBUG nova.compute.manager [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 19:59:18 compute-0 kernel: tap8c3f3594-74 (unregistering): left promiscuous mode
Dec 10 19:59:18 compute-0 NetworkManager[56238]: <info>  [1765396758.8906] device (tap8c3f3594-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.901 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:18 compute-0 ovn_controller[97701]: 2025-12-10T19:59:18Z|00048|binding|INFO|Releasing lport 8c3f3594-74a1-4927-9de3-1d09f5a52be0 from this chassis (sb_readonly=0)
Dec 10 19:59:18 compute-0 ovn_controller[97701]: 2025-12-10T19:59:18Z|00049|binding|INFO|Setting lport 8c3f3594-74a1-4927-9de3-1d09f5a52be0 down in Southbound
Dec 10 19:59:18 compute-0 ovn_controller[97701]: 2025-12-10T19:59:18Z|00050|binding|INFO|Removing iface tap8c3f3594-74 ovn-installed in OVS
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.908 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:18.916 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:f1:12 192.168.0.167'], port_security=['fa:16:3e:49:f1:12 192.168.0.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-od5vzkxsmgya-eshjaqtxgfue-port-gidketh4l52q', 'neutron:cidrs': '192.168.0.167/24', 'neutron:device_id': '51eb07cf-1168-4801-98e1-e0188e2c5f55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-od5vzkxsmgya-eshjaqtxgfue-port-gidketh4l52q', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=8c3f3594-74a1-4927-9de3-1d09f5a52be0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 19:59:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:18.920 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 8c3f3594-74a1-4927-9de3-1d09f5a52be0 in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 unbound from our chassis
Dec 10 19:59:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:18.924 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 19:59:18 compute-0 nova_compute[189279]: 2025-12-10 19:59:18.925 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:18 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 10 19:59:18 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Consumed 6.371s CPU time.
Dec 10 19:59:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:18.950 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ad8cd8-b701-4bb2-8cc1-66c9218d0153]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:18 compute-0 systemd-machined[155642]: Machine qemu-4-instance-00000003 terminated.
Dec 10 19:59:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:18.995 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb0aedb-61a7-4d39-a32b-f39e2c457e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.000 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3ceead-abb1-48a0-bf53-3ef23b7e05b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:19 compute-0 podman[242116]: 2025-12-10 19:59:19.017558907 +0000 UTC m=+0.089527369 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.037 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[ae11cd43-ffca-4ee7-8853-f94373446d25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:19 compute-0 podman[242118]: 2025-12-10 19:59:19.045022545 +0000 UTC m=+0.113513573 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc.)
Dec 10 19:59:19 compute-0 podman[242117]: 2025-12-10 19:59:19.054388967 +0000 UTC m=+0.131433295 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm)
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.062 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9f05ef-4076-4bde-8d9f-4d307c3868aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 33508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242180, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.084 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[8c823ea5-8133-4261-baf2-ecd34c4f3562]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242183, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242183, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.086 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.088 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.099 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.101 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.103 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.104 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:19.105 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.154 189283 INFO nova.virt.libvirt.driver [-] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Instance destroyed successfully.
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.155 189283 DEBUG nova.objects.instance [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'resources' on Instance uuid 51eb07cf-1168-4801-98e1-e0188e2c5f55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.173 189283 DEBUG nova.virt.libvirt.vif [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T19:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-od5vzkxsmgya-eshjaqtxgfue-vnf-eucagx5bjrqt',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-od5vzkxsmgya-eshjaqtxgfue-vnf-eucagx5bjrqt',id=3,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T19:59:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-ha6jivrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T19:59:13Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNjMyOTE3NDgzNDkxMTc4NTIxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM2MzI5MTc0ODM0OTExNzg1MjE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzYzMjkxNzQ4MzQ5MTE3ODUyMT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
[base64-encoded cloud-init MIME multipart user_data elided from oversized DEBUG message; decoded parts: heat-cfntools boothook, part-handler.py, cfn-userdata, loguserdata.py, cfn-metadata-server, cfn-boto-cfg]',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=51eb07cf-1168-4801-98e1-e0188e2c5f55,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.173 189283 DEBUG nova.network.os_vif_util [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "address": "fa:16:3e:49:f1:12", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c3f3594-74", "ovs_interfaceid": "8c3f3594-74a1-4927-9de3-1d09f5a52be0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.174 189283 DEBUG nova.network.os_vif_util [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:f1:12,bridge_name='br-int',has_traffic_filtering=True,id=8c3f3594-74a1-4927-9de3-1d09f5a52be0,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8c3f3594-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.174 189283 DEBUG os_vif [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:f1:12,bridge_name='br-int',has_traffic_filtering=True,id=8c3f3594-74a1-4927-9de3-1d09f5a52be0,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8c3f3594-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.176 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.176 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c3f3594-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.178 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.179 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.181 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.184 189283 INFO os_vif [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:f1:12,bridge_name='br-int',has_traffic_filtering=True,id=8c3f3594-74a1-4927-9de3-1d09f5a52be0,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8c3f3594-74')
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.184 189283 INFO nova.virt.libvirt.driver [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Deleting instance files /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55_del
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.185 189283 INFO nova.virt.libvirt.driver [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Deletion of /var/lib/nova/instances/51eb07cf-1168-4801-98e1-e0188e2c5f55_del complete
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.268 189283 DEBUG nova.virt.libvirt.host [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.269 189283 INFO nova.virt.libvirt.host [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] UEFI support detected
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.271 189283 INFO nova.compute.manager [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Took 0.42 seconds to destroy the instance on the hypervisor.
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.271 189283 DEBUG oslo.service.loopingcall [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.272 189283 DEBUG nova.compute.manager [-] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.272 189283 DEBUG nova.network.neutron [-] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.445 189283 DEBUG nova.compute.manager [req-5c8dbdb8-a375-4a10-a400-8ec03ad62069 req-ae675fdb-656c-4263-af00-2a0e87297f22 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received event network-vif-unplugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.446 189283 DEBUG oslo_concurrency.lockutils [req-5c8dbdb8-a375-4a10-a400-8ec03ad62069 req-ae675fdb-656c-4263-af00-2a0e87297f22 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.446 189283 DEBUG oslo_concurrency.lockutils [req-5c8dbdb8-a375-4a10-a400-8ec03ad62069 req-ae675fdb-656c-4263-af00-2a0e87297f22 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.446 189283 DEBUG oslo_concurrency.lockutils [req-5c8dbdb8-a375-4a10-a400-8ec03ad62069 req-ae675fdb-656c-4263-af00-2a0e87297f22 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.446 189283 DEBUG nova.compute.manager [req-5c8dbdb8-a375-4a10-a400-8ec03ad62069 req-ae675fdb-656c-4263-af00-2a0e87297f22 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] No waiting events found dispatching network-vif-unplugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 19:59:19 compute-0 nova_compute[189279]: 2025-12-10 19:59:19.446 189283 DEBUG nova.compute.manager [req-5c8dbdb8-a375-4a10-a400-8ec03ad62069 req-ae675fdb-656c-4263-af00-2a0e87297f22 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received event network-vif-unplugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 19:59:19 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 19:59:19.173 189283 DEBUG nova.virt.libvirt.vif [None req-cc5aec28-d0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.716 189283 DEBUG nova.network.neutron [-] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.731 189283 INFO nova.compute.manager [-] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Took 1.46 seconds to deallocate network for instance.
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.763 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.764 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.887 189283 DEBUG nova.compute.provider_tree [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.900 189283 DEBUG nova.scheduler.client.report [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.919 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:20 compute-0 nova_compute[189279]: 2025-12-10 19:59:20.941 189283 INFO nova.scheduler.client.report [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Deleted allocations for instance 51eb07cf-1168-4801-98e1-e0188e2c5f55
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.002 189283 DEBUG oslo_concurrency.lockutils [None req-cc5aec28-d044-41ca-93af-9c692d1e13fa 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.491 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.506 189283 DEBUG nova.compute.manager [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received event network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.507 189283 DEBUG oslo_concurrency.lockutils [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.507 189283 DEBUG oslo_concurrency.lockutils [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.507 189283 DEBUG oslo_concurrency.lockutils [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "51eb07cf-1168-4801-98e1-e0188e2c5f55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.507 189283 DEBUG nova.compute.manager [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] No waiting events found dispatching network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.508 189283 WARNING nova.compute.manager [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received unexpected event network-vif-plugged-8c3f3594-74a1-4927-9de3-1d09f5a52be0 for instance with vm_state deleted and task_state None.
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.508 189283 DEBUG nova.compute.manager [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Received event network-changed-8c3f3594-74a1-4927-9de3-1d09f5a52be0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.508 189283 DEBUG nova.compute.manager [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Refreshing instance network info cache due to event network-changed-8c3f3594-74a1-4927-9de3-1d09f5a52be0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.508 189283 DEBUG oslo_concurrency.lockutils [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.508 189283 DEBUG oslo_concurrency.lockutils [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.509 189283 DEBUG nova.network.neutron [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Refreshing network info cache for port 8c3f3594-74a1-4927-9de3-1d09f5a52be0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 19:59:21 compute-0 nova_compute[189279]: 2025-12-10 19:59:21.613 189283 DEBUG nova.network.neutron [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 19:59:22 compute-0 nova_compute[189279]: 2025-12-10 19:59:22.009 189283 DEBUG nova.network.neutron [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 10 19:59:22 compute-0 nova_compute[189279]: 2025-12-10 19:59:22.009 189283 DEBUG oslo_concurrency.lockutils [req-d1d7a755-8aae-454a-99c5-c4da0680b48b req-684f56e5-9347-4941-abed-4fb7990b8e13 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-51eb07cf-1168-4801-98e1-e0188e2c5f55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:59:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:23.374 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:23.375 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 19:59:23.376 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:24 compute-0 nova_compute[189279]: 2025-12-10 19:59:24.179 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:25 compute-0 podman[242206]: 2025-12-10 19:59:25.10322727 +0000 UTC m=+0.077898066 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 10 19:59:25 compute-0 podman[242207]: 2025-12-10 19:59:25.121012919 +0000 UTC m=+0.093477036 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 19:59:26 compute-0 nova_compute[189279]: 2025-12-10 19:59:26.495 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:29 compute-0 podman[242248]: 2025-12-10 19:59:29.154493452 +0000 UTC m=+0.127479089 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 10 19:59:29 compute-0 nova_compute[189279]: 2025-12-10 19:59:29.182 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:29 compute-0 podman[203484]: time="2025-12-10T19:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:59:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:59:29 compute-0 podman[203484]: @ - - [10/Dec/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Dec 10 19:59:31 compute-0 openstack_network_exporter[205632]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:59:31 compute-0 openstack_network_exporter[205632]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 19:59:31 compute-0 openstack_network_exporter[205632]: ERROR   19:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 19:59:31 compute-0 openstack_network_exporter[205632]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 19:59:31 compute-0 openstack_network_exporter[205632]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 19:59:31 compute-0 nova_compute[189279]: 2025-12-10 19:59:31.498 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:34 compute-0 nova_compute[189279]: 2025-12-10 19:59:34.153 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765396759.1518703, 51eb07cf-1168-4801-98e1-e0188e2c5f55 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 19:59:34 compute-0 nova_compute[189279]: 2025-12-10 19:59:34.154 189283 INFO nova.compute.manager [-] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] VM Stopped (Lifecycle Event)
Dec 10 19:59:34 compute-0 nova_compute[189279]: 2025-12-10 19:59:34.181 189283 DEBUG nova.compute.manager [None req-76df4691-1ce8-40f5-85ca-bb1b1536d55a - - - - - -] [instance: 51eb07cf-1168-4801-98e1-e0188e2c5f55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 19:59:34 compute-0 nova_compute[189279]: 2025-12-10 19:59:34.184 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:35 compute-0 podman[242275]: 2025-12-10 19:59:35.136184019 +0000 UTC m=+0.112811195 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 10 19:59:36 compute-0 nova_compute[189279]: 2025-12-10 19:59:36.502 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:39 compute-0 nova_compute[189279]: 2025-12-10 19:59:39.188 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:40 compute-0 ovn_controller[97701]: 2025-12-10T19:59:40Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:db:85 192.168.0.7
Dec 10 19:59:40 compute-0 ovn_controller[97701]: 2025-12-10T19:59:40Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:db:85 192.168.0.7
Dec 10 19:59:41 compute-0 nova_compute[189279]: 2025-12-10 19:59:41.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:41 compute-0 nova_compute[189279]: 2025-12-10 19:59:41.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 19:59:41 compute-0 nova_compute[189279]: 2025-12-10 19:59:41.504 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 19:59:41 compute-0 nova_compute[189279]: 2025-12-10 19:59:41.507 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.173 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.174 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.185 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa4248e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.191 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1fbc523f-accf-4848-80b7-6d997e0c65bf from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 19:59:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:42.193 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1fbc523f-accf-4848-80b7-6d997e0c65bf -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 19:59:43 compute-0 podman[242311]: 2025-12-10 19:59:43.111527443 +0000 UTC m=+0.086467587 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm)
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.351 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1958 Content-Type: application/json Date: Wed, 10 Dec 2025 19:59:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-af3587e1-5b3d-4403-a0f3-fa4e3e2d960e x-openstack-request-id: req-af3587e1-5b3d-4403-a0f3-fa4e3e2d960e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.352 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1fbc523f-accf-4848-80b7-6d997e0c65bf", "name": "vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp", "status": "ACTIVE", "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "user_id": "2143e69e49fd49db99c8737c973c1ea5", "metadata": {"metering.server_group": "9d7a68be-d216-4b06-b611-878d356c6d68"}, "hostId": "dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852", "image": {"id": "06e6231d-0a77-4b09-acb3-e7faf5a777be", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/06e6231d-0a77-4b09-acb3-e7faf5a777be"}]}, "flavor": {"id": "0fc2e5b1-b522-4c52-bdef-97db09e458e4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/0fc2e5b1-b522-4c52-bdef-97db09e458e4"}]}, "created": "2025-12-10T19:59:01Z", "updated": "2025-12-10T19:59:11Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:85:db:85"}, {"version": 4, "addr": "192.168.122.237", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:85:db:85"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1fbc523f-accf-4848-80b7-6d997e0c65bf"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1fbc523f-accf-4848-80b7-6d997e0c65bf"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-10T19:59:10.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.352 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1fbc523f-accf-4848-80b7-6d997e0c65bf used request id req-af3587e1-5b3d-4403-a0f3-fa4e3e2d960e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
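
The REQ/RESP pair above is the discovery step fetching instance metadata from the Nova API; the DEBUG line even prints a curl equivalent of the request. A rough sketch of issuing the same call with python-novaclient; the auth URL and credentials below are placeholders (the agent authenticates with its own service account), only the server UUID is taken from the log:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Placeholder credentials/endpoint -- substitute the real service account.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="***",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    nova = client.Client("2.1", session=sess)

    # Same request as the logged GET /v2.1/servers/<uuid>.
    server = nova.servers.get("1fbc523f-accf-4848-80b7-6d997e0c65bf")
    print(server.name, server.status, server.metadata)
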
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.353 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1fbc523f-accf-4848-80b7-6d997e0c65bf', 'name': 'vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.356 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ac2c8050-72b5-419c-ba99-c4feeb26147a', 'name': 'vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.356 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T19:59:43.357206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
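
Each meter follows the same trace: an INFO "Polling pollster ..." line, a coordination check, a "Polster heartbeat update" (the spelling is in the logged message itself), a heartbeat confirmation from a second worker, and a closing "Finished polling ...". A small self-contained sketch that pulls the meter name and timestamp out of lines in exactly this oslo.log format, which is handy for timing individual pollsters; the two sample strings are copied from this cycle:

    import re

    # Matches the oslo.log payload inside these journald lines.
    PATTERN = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO "
        r"ceilometer\.polling\.manager \[-\] "
        r"(?P<event>Polling|Finished polling) pollster (?P<meter>\S+)"
    )

    sample = [
        "2025-12-10 19:59:43.356 14 INFO ceilometer.polling.manager [-] "
        "Polling pollster disk.ephemeral.size in the context of pollsters",
        "2025-12-10 19:59:43.358 14 INFO ceilometer.polling.manager [-] "
        "Finished polling pollster disk.ephemeral.size in the context of pollsters",
    ]

    for line in sample:
        m = PATTERN.search(line)
        if m:
            print(m.group("event"), m.group("meter"), "at", m.group("ts"))
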
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.359 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T19:59:43.359195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.390 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.392 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.393 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.421 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.422 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.424 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.452 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.454 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.455 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
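
Each instance reports three disk.device.capacity samples, one per attached disk; 1073741824 bytes is exactly 1 GiB, which lines up with the m1.small flavor's 1 GB root and 1 GB ephemeral disks in the discovery data, while the small third value is plausibly the config drive (config_drive is "True" in the Nova response). A quick check of the unit conversion under those assumptions:

    # Values copied from the disk.device.capacity DEBUG lines above (bytes).
    capacities = {
        "root disk": 1073741824,
        "ephemeral disk": 1073741824,
        "third device (likely the config drive)": 583680,
    }

    GIB = 1024 ** 3
    for device, size_bytes in capacities.items():
        print(f"{device}: {size_bytes} B = {size_bytes / GIB:.4f} GiB")

    # 1073741824 / 1024**3 == 1.0, i.e. the flavor's 1 GB disk sizes.
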
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.459 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T19:59:43.459681) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.461 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.462 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T19:59:43.462450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.469 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.475 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1fbc523f-accf-4848-80b7-6d997e0c65bf / tapb4b01034-4b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.475 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.480 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
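
The "No delta meter predecessor" DEBUG line for instance 1fbc523f-... records that the libvirt inspector has no cached counter from an earlier poll for that vNIC, so no delta can be produced on this cycle; the cumulative network.incoming.packets sample is still emitted. A minimal sketch of that first-poll behaviour, with illustrative names rather than ceilometer's internals:

    from typing import Optional

    # Cache of the previous cumulative counter per (instance, interface).
    previous: dict[tuple[str, str], int] = {}

    def delta_sample(instance: str, iface: str, current: int) -> Optional[int]:
        """Return the delta since the last poll, or None on the first poll."""
        key = (instance, iface)
        prior = previous.get(key)        # no predecessor on the first cycle
        previous[key] = current
        if prior is None:
            return None                  # mirrors "No delta meter predecessor"
        return current - prior

    # First poll: no predecessor, so no delta is produced.
    print(delta_sample("1fbc523f", "tapb4b01034-4b", 16))   # -> None
    # Next poll: the delta is the difference of the cumulative counters.
    print(delta_sample("1fbc523f", "tapb4b01034-4b", 40))   # -> 24
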
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.481 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.482 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.482 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.483 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T19:59:43.481883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.483 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.484 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.484 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T19:59:43.484229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.485 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.485 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.486 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.486 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.486 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.487 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.bytes volume: 1511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T19:59:43.486421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.487 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes volume: 7478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.488 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.489 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T19:59:43.488715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.489 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.489 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes.delta volume: 2742 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.490 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.490 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T19:59:43.490623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.512 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: 48.76953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.536 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/memory.usage volume: 33.2890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.559 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/memory.usage volume: 49.06640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
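
memory.usage is reported in megabytes, so these values read directly against the flavor's 512 MB of RAM from the discovery data; for example, 48.77 on instance 12986b74-... is roughly 9.5 % of the allocation. A short check of that arithmetic:

    flavor_ram_mb = 512  # m1.small, from the discovery data above

    usage_mb = {
        "12986b74-7b15-4ff4-9019-081950660d4b": 48.76953125,
        "1fbc523f-accf-4848-80b7-6d997e0c65bf": 33.2890625,
        "ac2c8050-72b5-419c-ba99-c4feeb26147a": 49.06640625,
    }

    for instance, mb in usage_mb.items():
        pct = 100 * mb / flavor_ram_mb
        print(f"{instance}: {mb:.2f} MB = {pct:.1f}% of {flavor_ram_mb} MB")
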
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.560 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp>]
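
The ERROR above follows directly from the preceding DEBUG line: the libvirt inspector exposes cumulative byte counters but no rates, so OutgoingBytesRatePollster can never serve these instances, and raising PollsterPermanentError makes the manager blacklist the resources for this meter instead of retrying every cycle. A minimal sketch of that error path, assuming the ceilometer package is importable (as it is on this node); the pollster class itself is illustrative, not ceilometer's real implementation:

    from ceilometer.polling import plugin_base

    class NoRateDataPollster(plugin_base.PollsterBase):
        """Illustrative only: shows the permanent-error path."""

        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # The hypervisor inspector cannot provide this statistic at all,
            # so tell the manager to stop polling these resources for it.
            raise plugin_base.PollsterPermanentError(resources)
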
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.562 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.562 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.562 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.562 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.562 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes volume: 8490 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.564 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T19:59:43.560550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.564 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.564 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.564 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.565 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T19:59:43.562238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.565 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.565 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.566 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T19:59:43.563791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.567 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.567 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.567 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T19:59:43.567004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.568 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T19:59:43.568613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.569 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.570 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.570 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.570 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T19:59:43.570089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T19:59:43.571444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.641 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.641 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.641 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.748 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.755 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.756 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.884 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.885 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.885 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
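The disk.device.read.bytes samples above are cumulative per-device byte counters, one line per virtual disk of each instance. As an illustration only (not Ceilometer's actual pollster code), the same counters can be read with the libvirt Python bindings; the sketch assumes python3-libvirt is installed and qemu:///system is reachable on the compute node.

import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        # Enumerate the disk targets (vda, vdb, ...) from the domain XML.
        tree = ET.fromstring(dom.XMLDesc())
        for target in tree.findall("./devices/disk/target"):
            dev = target.get("dev")
            # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
            rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            print(f"{dom.UUIDString()}/{dev} disk.device.read.bytes volume: {rd_bytes}")
finally:
    conn.close()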
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.886 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.887 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.887 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 38340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.888 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/cpu volume: 29280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T19:59:43.887060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.888 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/cpu volume: 242540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
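The cpu samples above (for example 38340000000) are cumulative guest CPU time in nanoseconds. A minimal sketch of reading the same figure via the libvirt bindings, again an illustration rather than Ceilometer's implementation; it assumes python3-libvirt and a local qemu:///system connection.

import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        # info() returns [state, maxMem, memory, nrVirtCpu, cpuTime], cpuTime in ns.
        state, max_mem, mem, n_vcpu, cpu_time_ns = dom.info()
        print(f"{dom.UUIDString()}/cpu volume: {cpu_time_ns} ({n_vcpu} vCPUs)")
finally:
    conn.close()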
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.889 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.890 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.890 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 425951231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T19:59:43.890075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.891 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 63853652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.891 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 49706577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.891 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 398719696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.892 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 103443581 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.892 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 86126104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.892 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 365261803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.893 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 76908904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.893 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 59898361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
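Per-device read-latency totals of the kind shown above can be obtained from libvirt's extended block statistics, where the rd_total_times counter is a running total in nanoseconds. A sketch under the same assumptions as before (python3-libvirt, qemu:///system), illustrative rather than the pollster's actual code.

import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        tree = ET.fromstring(dom.XMLDesc())
        for target in tree.findall("./devices/disk/target"):
            dev = target.get("dev")
            stats = dom.blockStatsFlags(dev)   # dict of extended block counters
            print(f"{dom.UUIDString()}/{dev} rd_total_times:", stats.get("rd_total_times"))
finally:
    conn.close()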
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.894 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.895 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.895 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T19:59:43.895248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.896 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.897 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.897 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.897 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.898 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.898 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.898 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.899 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.900 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T19:59:43.900045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.900 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.901 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.901 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.901 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.902 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.902 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.902 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.903 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
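The discovery line above names PerDevicePhysicalPollster for disk.device.usage, which suggests the on-host physical size of each device. libvirt reports capacity, allocation and physical size per device via blockInfo(); the sketch below is an illustration under the usual assumptions (python3-libvirt, qemu:///system), not Ceilometer's code.

import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        tree = ET.fromstring(dom.XMLDesc())
        for target in tree.findall("./devices/disk/target"):
            dev = target.get("dev")
            # blockInfo() returns [capacity, allocation, physical] in bytes.
            capacity, allocation, physical = dom.blockInfo(dev)
            print(f"{dom.UUIDString()}/{dev} physical: {physical} bytes "
                  f"(capacity {capacity}, allocation {allocation})")
finally:
    conn.close()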
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.904 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.904 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.904 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.904 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.905 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.905 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.905 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.906 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T19:59:43.904341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.906 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.906 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.907 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.908 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.909 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T19:59:43.909158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.909 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.910 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
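All three instances report power.state volume 1, which corresponds to a running domain (libvirt's VIR_DOMAIN_RUNNING constant is also 1). A minimal sketch of reading that state, illustrative only and assuming python3-libvirt with a reachable qemu:///system:

import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, reason = dom.state()   # e.g. (libvirt.VIR_DOMAIN_RUNNING, reason code)
        print(f"{dom.UUIDString()}/power.state volume: {state} "
              f"(running={state == libvirt.VIR_DOMAIN_RUNNING})")
finally:
    conn.close()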
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.912 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 816753194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T19:59:43.911684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.912 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 10242364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.912 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.912 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 1388010740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.913 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 13598806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.913 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.913 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 1284178296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.914 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 10530105 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.914 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.915 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.915 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.915 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.915 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.915 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.916 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.916 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T19:59:43.915755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.917 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.917 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.917 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.917 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.918 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.918 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.919 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.919 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.919 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.920 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.920 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes.delta volume: 3599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.920 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
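A .delta meter is the difference between two consecutive readings of a cumulative counter; for network.incoming.bytes that counter is the interface's received-byte total. The sketch below illustrates the idea with libvirt's interfaceStats() and a fixed 10-second wait standing in for the agent's polling interval; it assumes python3-libvirt and qemu:///system and is not Ceilometer's implementation.

import time
import xml.etree.ElementTree as ET
import libvirt

def rx_bytes(dom, iface):
    # interfaceStats() returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                           tx_bytes, tx_packets, tx_errs, tx_drop).
    return dom.interfaceStats(iface)[0]

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        tree = ET.fromstring(dom.XMLDesc())
        target = tree.find("./devices/interface/target")
        if target is None:
            continue
        iface = target.get("dev")          # e.g. the instance's tap device
        before = rx_bytes(dom, iface)
        time.sleep(10)                     # stand-in for the polling interval
        print(f"{dom.UUIDString()}/network.incoming.bytes.delta volume: "
              f"{rx_bytes(dom, iface) - before}")
finally:
    conn.close()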
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T19:59:43.919669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.921 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.921 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.922 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.922 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp>]
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T19:59:43.921836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
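The ERROR above is the polling manager blacklisting a meter for resources it can never serve: the libvirt inspector provides no data for IncomingBytesRatePollster, so a permanent error is raised and those instances are dropped from that pollster's future cycles instead of failing every interval. The sketch below shows that pattern in isolation; the names (PermanentError, run_pollster, blacklist) are illustrative, not Ceilometer's.

class PermanentError(Exception):
    def __init__(self, resources):
        super().__init__(resources)
        self.resources = set(resources)

blacklist = {}   # pollster name -> resource ids that will never be polled again

def run_pollster(name, poll, resources):
    usable = [r for r in resources if r not in blacklist.get(name, set())]
    if not usable:
        return []
    try:
        return poll(usable)
    except PermanentError as exc:
        blacklist.setdefault(name, set()).update(exc.resources)
        print(f"Prevent pollster {name} from polling {sorted(exc.resources)} anymore!")
        return []

def rate_poll(resources):
    # Stands in for a pollster whose inspector cannot provide the data at all.
    raise PermanentError(resources)

instances = ["instance-1", "instance-2", "instance-3"]
run_pollster("network.incoming.bytes.rate", rate_poll, instances)  # logs the error once
run_pollster("network.incoming.bytes.rate", rate_poll, instances)  # silently skipped now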
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 19:59:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 19:59:43.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
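Taken together, the DEBUG lines above trace the same per-meter cycle over and over: run discovery, skip coordination when no group name is configured, record a heartbeat, emit one sample per resource, and mark the pollster finished. A compressed sketch of that loop, with purely illustrative names (run_cycle, heartbeats) rather than the ceilometer.polling.manager API:

from datetime import datetime, timezone

heartbeats = {}   # meter name -> time of the last successful poll

def run_cycle(pollsters, discover):
    for name, get_samples in pollsters.items():
        resources = discover()                           # "Executing discovery process ..."
        print(f"Polling pollster {name} in the context of pollsters")
        heartbeats[name] = datetime.now(timezone.utc)    # "Updated heartbeat for ..."
        for sample in get_samples(resources):
            print(f"{sample['resource']}/{name} volume: {sample['volume']}")
        print(f"Finished polling pollster {name} in the context of pollsters")

run_cycle(
    {"power.state": lambda res: [{"resource": r, "volume": 1} for r in res]},
    discover=lambda: ["instance-1", "instance-2"],
)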
Dec 10 19:59:44 compute-0 nova_compute[189279]: 2025-12-10 19:59:44.191 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:44 compute-0 podman[242332]: 2025-12-10 19:59:44.798928532 +0000 UTC m=+0.095844018 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 19:59:45 compute-0 nova_compute[189279]: 2025-12-10 19:59:45.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:46 compute-0 nova_compute[189279]: 2025-12-10 19:59:46.509 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:47 compute-0 nova_compute[189279]: 2025-12-10 19:59:47.501 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:47 compute-0 nova_compute[189279]: 2025-12-10 19:59:47.505 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:48 compute-0 nova_compute[189279]: 2025-12-10 19:59:48.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:48 compute-0 nova_compute[189279]: 2025-12-10 19:59:48.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
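The _reclaim_queued_deletes task exits immediately here because reclaim_instance_interval is not set to a positive value, so this node never reclaims soft-deleted instances. A trivial sketch of that guard; CONF below is a plain stand-in for nova's oslo.config object, not the real one.

class CONF:
    reclaim_instance_interval = 0   # <= 0 leaves the reclaim task disabled

def reclaim_queued_deletes():
    if CONF.reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    print("reclaiming instances soft-deleted more than "
          f"{CONF.reclaim_instance_interval}s ago")

reclaim_queued_deletes()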
Dec 10 19:59:49 compute-0 nova_compute[189279]: 2025-12-10 19:59:49.195 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:49 compute-0 nova_compute[189279]: 2025-12-10 19:59:49.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:49 compute-0 nova_compute[189279]: 2025-12-10 19:59:49.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:50 compute-0 podman[242358]: 2025-12-10 19:59:50.124849524 +0000 UTC m=+0.089306013 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 19:59:50 compute-0 podman[242360]: 2025-12-10 19:59:50.129536519 +0000 UTC m=+0.087742370 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 19:59:50 compute-0 podman[242359]: 2025-12-10 19:59:50.138660405 +0000 UTC m=+0.092384265 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec 10 19:59:50 compute-0 nova_compute[189279]: 2025-12-10 19:59:50.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:50 compute-0 nova_compute[189279]: 2025-12-10 19:59:50.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 19:59:51 compute-0 nova_compute[189279]: 2025-12-10 19:59:51.269 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 19:59:51 compute-0 nova_compute[189279]: 2025-12-10 19:59:51.270 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 19:59:51 compute-0 nova_compute[189279]: 2025-12-10 19:59:51.270 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 19:59:51 compute-0 nova_compute[189279]: 2025-12-10 19:59:51.512 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:52 compute-0 ovn_controller[97701]: 2025-12-10T19:59:52Z|00051|memory_trim|INFO|Detected inactivity (last active 30035 ms ago): trimming memory
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.777 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updating instance_info_cache with network_info: [{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.794 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.794 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.795 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.795 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.796 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.820 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.820 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.821 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.821 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.916 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.980 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:52 compute-0 nova_compute[189279]: 2025-12-10 19:59:52.981 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.041 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.042 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.103 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.105 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.187 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.195 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.259 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.260 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.320 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.322 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.388 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.390 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.476 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.487 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.577 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.578 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.661 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.662 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.746 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.747 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 19:59:53 compute-0 nova_compute[189279]: 2025-12-10 19:59:53.845 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.201 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.253 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.255 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4774MB free_disk=72.33039855957031GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.255 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.255 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.844 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.845 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.845 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 1fbc523f-accf-4848-80b7-6d997e0c65bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.845 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 19:59:54 compute-0 nova_compute[189279]: 2025-12-10 19:59:54.845 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 19:59:56 compute-0 podman[242453]: 2025-12-10 19:59:56.143664003 +0000 UTC m=+0.111195632 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 19:59:56 compute-0 podman[242454]: 2025-12-10 19:59:56.146563031 +0000 UTC m=+0.108484839 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 19:59:56 compute-0 nova_compute[189279]: 2025-12-10 19:59:56.516 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:56 compute-0 nova_compute[189279]: 2025-12-10 19:59:56.897 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 19:59:56 compute-0 nova_compute[189279]: 2025-12-10 19:59:56.959 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 19:59:57 compute-0 nova_compute[189279]: 2025-12-10 19:59:57.069 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 19:59:57 compute-0 nova_compute[189279]: 2025-12-10 19:59:57.070 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 19:59:57 compute-0 nova_compute[189279]: 2025-12-10 19:59:57.070 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 19:59:57 compute-0 nova_compute[189279]: 2025-12-10 19:59:57.071 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 19:59:59 compute-0 nova_compute[189279]: 2025-12-10 19:59:59.205 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 19:59:59 compute-0 podman[203484]: time="2025-12-10T19:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 19:59:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 19:59:59 compute-0 podman[203484]: @ - - [10/Dec/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec 10 20:00:00 compute-0 podman[242497]: 2025-12-10 20:00:00.165820199 +0000 UTC m=+0.131748704 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 10 20:00:01 compute-0 openstack_network_exporter[205632]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:00:01 compute-0 openstack_network_exporter[205632]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:00:01 compute-0 openstack_network_exporter[205632]: ERROR   20:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:00:01 compute-0 openstack_network_exporter[205632]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:00:01 compute-0 openstack_network_exporter[205632]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:00:01 compute-0 nova_compute[189279]: 2025-12-10 20:00:01.519 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:04 compute-0 nova_compute[189279]: 2025-12-10 20:00:04.208 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:06 compute-0 podman[242521]: 2025-12-10 20:00:06.147392871 +0000 UTC m=+0.111279463 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 20:00:06 compute-0 nova_compute[189279]: 2025-12-10 20:00:06.523 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:09 compute-0 nova_compute[189279]: 2025-12-10 20:00:09.210 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:11 compute-0 nova_compute[189279]: 2025-12-10 20:00:11.525 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:14 compute-0 podman[242541]: 2025-12-10 20:00:14.168943421 +0000 UTC m=+0.140785756 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal)
Dec 10 20:00:14 compute-0 nova_compute[189279]: 2025-12-10 20:00:14.213 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:15 compute-0 podman[242562]: 2025-12-10 20:00:15.094253965 +0000 UTC m=+0.069623953 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:00:16 compute-0 nova_compute[189279]: 2025-12-10 20:00:16.527 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:19 compute-0 nova_compute[189279]: 2025-12-10 20:00:19.216 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:21 compute-0 podman[242589]: 2025-12-10 20:00:21.093089834 +0000 UTC m=+0.068801732 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public)
Dec 10 20:00:21 compute-0 podman[242587]: 2025-12-10 20:00:21.102779664 +0000 UTC m=+0.086047295 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:00:21 compute-0 podman[242588]: 2025-12-10 20:00:21.114763316 +0000 UTC m=+0.093936807 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 20:00:21 compute-0 nova_compute[189279]: 2025-12-10 20:00:21.530 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:00:23.376 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:00:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:00:23.377 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:00:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:00:23.377 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:00:24 compute-0 nova_compute[189279]: 2025-12-10 20:00:24.218 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:25 compute-0 sshd-session[242643]: Received disconnect from 193.46.255.217 port 32740:11:  [preauth]
Dec 10 20:00:25 compute-0 sshd-session[242643]: Disconnected from authenticating user root 193.46.255.217 port 32740 [preauth]
Dec 10 20:00:26 compute-0 nova_compute[189279]: 2025-12-10 20:00:26.534 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:27 compute-0 podman[242645]: 2025-12-10 20:00:27.112291728 +0000 UTC m=+0.087185147 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 10 20:00:27 compute-0 podman[242646]: 2025-12-10 20:00:27.112447102 +0000 UTC m=+0.081684849 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:00:29 compute-0 nova_compute[189279]: 2025-12-10 20:00:29.222 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:29 compute-0 podman[203484]: time="2025-12-10T20:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:00:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:00:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec 10 20:00:31 compute-0 podman[242684]: 2025-12-10 20:00:31.172999157 +0000 UTC m=+0.138030823 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:00:31 compute-0 openstack_network_exporter[205632]: ERROR   20:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:00:31 compute-0 openstack_network_exporter[205632]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:00:31 compute-0 openstack_network_exporter[205632]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:00:31 compute-0 openstack_network_exporter[205632]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:00:31 compute-0 openstack_network_exporter[205632]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:00:31 compute-0 nova_compute[189279]: 2025-12-10 20:00:31.537 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:34 compute-0 nova_compute[189279]: 2025-12-10 20:00:34.225 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:36 compute-0 nova_compute[189279]: 2025-12-10 20:00:36.543 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:37 compute-0 podman[242710]: 2025-12-10 20:00:37.081186905 +0000 UTC m=+0.067823455 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:00:39 compute-0 nova_compute[189279]: 2025-12-10 20:00:39.229 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:41 compute-0 nova_compute[189279]: 2025-12-10 20:00:41.541 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:44 compute-0 nova_compute[189279]: 2025-12-10 20:00:44.232 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:44 compute-0 podman[242728]: 2025-12-10 20:00:44.7649494 +0000 UTC m=+0.078194973 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Dec 10 20:00:46 compute-0 podman[242750]: 2025-12-10 20:00:46.076478631 +0000 UTC m=+0.055170525 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:00:46 compute-0 nova_compute[189279]: 2025-12-10 20:00:46.543 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:49 compute-0 nova_compute[189279]: 2025-12-10 20:00:49.236 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:51 compute-0 nova_compute[189279]: 2025-12-10 20:00:51.546 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:52 compute-0 podman[242775]: 2025-12-10 20:00:52.109277732 +0000 UTC m=+0.090117225 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 20:00:52 compute-0 podman[242777]: 2025-12-10 20:00:52.120130984 +0000 UTC m=+0.092873429 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc.)
Dec 10 20:00:52 compute-0 podman[242776]: 2025-12-10 20:00:52.137537502 +0000 UTC m=+0.108302154 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:00:52 compute-0 nova_compute[189279]: 2025-12-10 20:00:52.781 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:52 compute-0 nova_compute[189279]: 2025-12-10 20:00:52.781 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:52 compute-0 nova_compute[189279]: 2025-12-10 20:00:52.807 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:52 compute-0 nova_compute[189279]: 2025-12-10 20:00:52.807 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:00:52 compute-0 nova_compute[189279]: 2025-12-10 20:00:52.807 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:00:53 compute-0 nova_compute[189279]: 2025-12-10 20:00:53.005 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:00:53 compute-0 nova_compute[189279]: 2025-12-10 20:00:53.006 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:00:53 compute-0 nova_compute[189279]: 2025-12-10 20:00:53.006 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:00:53 compute-0 nova_compute[189279]: 2025-12-10 20:00:53.007 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.239 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.330 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.353 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.353 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.353 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.354 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.354 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.354 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.354 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.355 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.355 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.355 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.380 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.381 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.381 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.381 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.670 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.732 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.733 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.795 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.796 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.857 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.858 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.920 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.930 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.996 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:54 compute-0 nova_compute[189279]: 2025-12-10 20:00:54.997 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.057 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.058 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.115 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.116 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.176 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.184 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.256 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.257 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.314 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.315 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.375 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.376 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.434 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.768 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.770 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4797MB free_disk=72.33039855957031GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.770 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.770 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.867 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.868 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.869 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 1fbc523f-accf-4848-80b7-6d997e0c65bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.869 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.869 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.951 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.969 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.971 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:00:55 compute-0 nova_compute[189279]: 2025-12-10 20:00:55.972 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:00:56 compute-0 nova_compute[189279]: 2025-12-10 20:00:56.547 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:00:57.070 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:00:57 compute-0 nova_compute[189279]: 2025-12-10 20:00:57.071 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:00:57.072 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:00:58 compute-0 podman[242867]: 2025-12-10 20:00:58.092905401 +0000 UTC m=+0.072081729 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:00:58 compute-0 podman[242866]: 2025-12-10 20:00:58.112656083 +0000 UTC m=+0.091235356 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:00:59 compute-0 nova_compute[189279]: 2025-12-10 20:00:59.241 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:00:59 compute-0 podman[203484]: time="2025-12-10T20:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:00:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:00:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec 10 20:01:01 compute-0 CROND[242906]: (root) CMD (run-parts /etc/cron.hourly)
Dec 10 20:01:01 compute-0 run-parts[242909]: (/etc/cron.hourly) starting 0anacron
Dec 10 20:01:01 compute-0 run-parts[242915]: (/etc/cron.hourly) finished 0anacron
Dec 10 20:01:01 compute-0 CROND[242905]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 10 20:01:01 compute-0 openstack_network_exporter[205632]: ERROR   20:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:01:01 compute-0 openstack_network_exporter[205632]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:01:01 compute-0 openstack_network_exporter[205632]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:01:01 compute-0 openstack_network_exporter[205632]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:01:01 compute-0 openstack_network_exporter[205632]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.463 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.464 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.486 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.552 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.574 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.574 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.584 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.584 189283 INFO nova.compute.claims [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.815 189283 DEBUG nova.compute.provider_tree [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.834 189283 DEBUG nova.scheduler.client.report [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.861 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.862 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.911 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.912 189283 DEBUG nova.network.neutron [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.931 189283 INFO nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:01:01 compute-0 nova_compute[189279]: 2025-12-10 20:01:01.962 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.076 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.078 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.079 189283 INFO nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Creating image(s)
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.079 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.079 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.080 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.095 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 podman[242916]: 2025-12-10 20:01:02.155319903 +0000 UTC m=+0.130049849 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.161 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.162 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "193edf3941027c090c206b4992bbea3ae5563eb9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.162 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.173 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.234 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.235 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.280 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9,backing_fmt=raw /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.281 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "193edf3941027c090c206b4992bbea3ae5563eb9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
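The qemu-img calls above (info on the cached base image, then create for the per-instance overlay) can be reproduced by hand with the same paths and flags this log shows. A minimal sketch; Nova itself runs these through oslo_concurrency.processutils with a prlimit wrapper, as the CMD lines indicate:

    import json
    import os
    import subprocess

    base = '/var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9'
    disk = '/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk'
    env = {**os.environ, 'LC_ALL': 'C', 'LANG': 'C'}

    # Inspect the cached base image (the driver wraps the same call in prlimit).
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base, '--force-share', '--output=json'], env=env))
    print(info['format'], info['virtual-size'])

    # Create a 1 GiB qcow2 overlay backed by the raw base image.
    subprocess.check_call(
        ['qemu-img', 'create', '-f', 'qcow2',
         '-o', f'backing_file={base},backing_fmt=raw', disk, '1073741824'],
        env=env)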
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.281 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.344 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.345 189283 DEBUG nova.virt.disk.api [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Checking if we can resize image /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.345 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.410 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.411 189283 DEBUG nova.virt.disk.api [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Cannot resize image /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
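The can_resize_image check logged above boils down to comparing the requested size (here the 1 GiB flavor root disk) against the overlay's current virtual size and refusing anything that is not a grow. A rough sketch of that comparison, not Nova's exact code:

    import json
    import subprocess

    def can_grow(disk_path, requested_bytes):
        # qemu-img reports the overlay's current virtual size in bytes.
        info = json.loads(subprocess.check_output(
            ['qemu-img', 'info', disk_path, '--force-share', '--output=json']))
        return requested_bytes > info['virtual-size']

    # The overlay was just created at 1073741824 bytes, so asking for the same
    # 1073741824 bytes is not a grow and the resize step is skipped.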
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.412 189283 DEBUG nova.objects.instance [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'migration_context' on Instance uuid 26729739-a300-43fe-8678-5294ed41f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.427 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.427 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.428 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.444 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.504 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.505 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.505 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.517 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.575 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.576 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.617 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.618 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.618 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.681 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.682 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.683 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Ensure instance console log exists: /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.683 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.684 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:02 compute-0 nova_compute[189279]: 2025-12-10 20:01:02.684 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:03 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:03.075 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.243 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.556 189283 DEBUG nova.network.neutron [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Successfully updated port: 0785494f-981a-4c23-8e42-a15d0c582bfb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.572 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.573 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquired lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.573 189283 DEBUG nova.network.neutron [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.640 189283 DEBUG nova.compute.manager [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received event network-changed-0785494f-981a-4c23-8e42-a15d0c582bfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.640 189283 DEBUG nova.compute.manager [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Refreshing instance network info cache due to event network-changed-0785494f-981a-4c23-8e42-a15d0c582bfb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:01:04 compute-0 nova_compute[189279]: 2025-12-10 20:01:04.641 189283 DEBUG oslo_concurrency.lockutils [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:01:05 compute-0 nova_compute[189279]: 2025-12-10 20:01:05.317 189283 DEBUG nova.network.neutron [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.556 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.623 189283 DEBUG nova.network.neutron [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updating instance_info_cache with network_info: [{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.664 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Releasing lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.665 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Instance network_info: |[{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
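For readers who want to pull addresses out of network_info dumps like the two above, the structure is nested: each VIF carries a network with subnets, each subnet a list of fixed IPs, and each fixed IP its floating IPs. A small sketch against a trimmed copy of the data from this log (real entries carry many more fields):

    network_info = [{
        "id": "0785494f-981a-4c23-8e42-a15d0c582bfb",
        "address": "fa:16:3e:0b:ad:37",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.54",
                     "floating_ips": [{"address": "192.168.122.199"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip["floating_ips"]]
                print(vif["id"], ip["address"], "->", floats or "no floating IP")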
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.666 189283 DEBUG oslo_concurrency.lockutils [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.666 189283 DEBUG nova.network.neutron [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Refreshing network info cache for port 0785494f-981a-4c23-8e42-a15d0c582bfb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.670 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Start _get_guest_xml network_info=[{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_options': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.678 189283 WARNING nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.901 189283 DEBUG nova.virt.libvirt.host [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.903 189283 DEBUG nova.virt.libvirt.host [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.927 189283 DEBUG nova.virt.libvirt.host [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.929 189283 DEBUG nova.virt.libvirt.host [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.930 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.930 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T19:52:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0fc2e5b1-b522-4c52-bdef-97db09e458e4',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T19:52:04Z,direct_url=<?>,disk_format='qcow2',id=06e6231d-0a77-4b09-acb3-e7faf5a777be,min_disk=0,min_ram=0,name='cirros',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T19:52:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.931 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.932 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.933 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.933 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.934 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.934 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.935 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.936 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.936 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.937 189283 DEBUG nova.virt.hardware [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
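The topology lines above come from a search over (sockets, cores, threads) factorizations of the flavor's vCPU count, bounded by the 65536 limits; with 1 vCPU only 1:1:1 survives. A toy version of that enumeration, approximating but not reproducing the logic in nova/virt/hardware.py:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)] -> matches the log above
    print(possible_topologies(4))   # several splits, e.g. (2, 2, 1) and (4, 1, 1)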
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.942 189283 DEBUG nova.virt.libvirt.vif [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:01:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7',id=5,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-8y4xv3jg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:01:02Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDU3Nzg1NTg2NDQwMzg4MzE2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 10 20:01:06 compute-0 nova_compute[189279]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDU3Nzg1NTg2NDQwMzg4MzE2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=26729739-a300-43fe-8678-5294ed41f6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.943 189283 DEBUG nova.network.os_vif_util [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.945 189283 DEBUG nova.network.os_vif_util [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ad:37,bridge_name='br-int',has_traffic_filtering=True,id=0785494f-981a-4c23-8e42-a15d0c582bfb,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0785494f-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.947 189283 DEBUG nova.objects.instance [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'pci_devices' on Instance uuid 26729739-a300-43fe-8678-5294ed41f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.972 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <uuid>26729739-a300-43fe-8678-5294ed41f6ed</uuid>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <name>instance-00000005</name>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <memory>524288</memory>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <nova:name>vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7</nova:name>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:01:06</nova:creationTime>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <nova:flavor name="m1.small">
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:memory>512</nova:memory>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:ephemeral>1</nova:ephemeral>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:user uuid="2143e69e49fd49db99c8737c973c1ea5">admin</nova:user>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:project uuid="fe518ea62a94467e823b2b1046c57a2e">admin</nova:project>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="06e6231d-0a77-4b09-acb3-e7faf5a777be"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         <nova:port uuid="0785494f-981a-4c23-8e42-a15d0c582bfb">
Dec 10 20:01:06 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="192.168.0.54" ipVersion="4"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <system>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <entry name="serial">26729739-a300-43fe-8678-5294ed41f6ed</entry>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <entry name="uuid">26729739-a300-43fe-8678-5294ed41f6ed</entry>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </system>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <os>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   </os>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <features>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   </features>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <target dev="vdb" bus="virtio"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.config"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:0b:ad:37"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <target dev="tap0785494f-98"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/console.log" append="off"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <video>
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </video>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:01:06 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:01:06 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:01:06 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:01:06 compute-0 nova_compute[189279]: </domain>
Dec 10 20:01:06 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.974 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Preparing to wait for external event network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.975 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.976 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.977 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.978 189283 DEBUG nova.virt.libvirt.vif [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:01:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7',id=5,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-8y4xv3jg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:01:02Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDU3Nzg1NTg2NDQwMzg4MzE2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 10 20:01:06 compute-0 nova_compute[189279]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDU3Nzg1NTg2NDQwMzg4MzE2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=26729739-a300-43fe-8678-5294ed41f6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.979 189283 DEBUG nova.network.os_vif_util [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.981 189283 DEBUG nova.network.os_vif_util [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ad:37,bridge_name='br-int',has_traffic_filtering=True,id=0785494f-981a-4c23-8e42-a15d0c582bfb,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0785494f-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.982 189283 DEBUG os_vif [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ad:37,bridge_name='br-int',has_traffic_filtering=True,id=0785494f-981a-4c23-8e42-a15d0c582bfb,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0785494f-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.984 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.984 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.985 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.991 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.992 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0785494f-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:01:08 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 20:01:06.942 189283 DEBUG nova.virt.libvirt.vif [None req-aeea12dd-94 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.993 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0785494f-98, col_values=(('external_ids', {'iface-id': '0785494f-981a-4c23-8e42-a15d0c582bfb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:ad:37', 'vm-uuid': '26729739-a300-43fe-8678-5294ed41f6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:01:08 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 20:01:06.978 189283 DEBUG nova.virt.libvirt.vif [None req-aeea12dd-94 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.995 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:06 compute-0 nova_compute[189279]: 2025-12-10 20:01:06.998 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:01:08 compute-0 NetworkManager[56238]: <info>  [1765396868.1452] manager: (tap0785494f-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.155 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.157 189283 INFO os_vif [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ad:37,bridge_name='br-int',has_traffic_filtering=True,id=0785494f-981a-4c23-8e42-a15d0c582bfb,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0785494f-98')
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.207 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.208 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.208 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.208 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No VIF found with MAC fa:16:3e:0b:ad:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.208 189283 INFO nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Using config drive
Dec 10 20:01:08 compute-0 podman[242969]: 2025-12-10 20:01:08.235224149 +0000 UTC m=+0.072218971 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.328 189283 DEBUG nova.network.neutron [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updated VIF entry in instance network info cache for port 0785494f-981a-4c23-8e42-a15d0c582bfb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.329 189283 DEBUG nova.network.neutron [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updating instance_info_cache with network_info: [{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.348 189283 DEBUG oslo_concurrency.lockutils [req-123af409-d8d4-4de6-9233-67df5ba99388 req-6ec9aba6-0b19-4105-bfdf-190702a501a3 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.521 189283 INFO nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Creating config drive at /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.config
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.536 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ehypycc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.668 189283 DEBUG oslo_concurrency.processutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ehypycc" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:08 compute-0 kernel: tap0785494f-98: entered promiscuous mode
Dec 10 20:01:08 compute-0 NetworkManager[56238]: <info>  [1765396868.7533] manager: (tap0785494f-98): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec 10 20:01:08 compute-0 ovn_controller[97701]: 2025-12-10T20:01:08Z|00052|binding|INFO|Claiming lport 0785494f-981a-4c23-8e42-a15d0c582bfb for this chassis.
Dec 10 20:01:08 compute-0 ovn_controller[97701]: 2025-12-10T20:01:08Z|00053|binding|INFO|0785494f-981a-4c23-8e42-a15d0c582bfb: Claiming fa:16:3e:0b:ad:37 192.168.0.54
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.754 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.772 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:ad:37 192.168.0.54'], port_security=['fa:16:3e:0b:ad:37 192.168.0.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-tu426txpq63m-ebreuwdsmaq4-port-lzq7unw5gr5p', 'neutron:cidrs': '192.168.0.54/24', 'neutron:device_id': '26729739-a300-43fe-8678-5294ed41f6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-tu426txpq63m-ebreuwdsmaq4-port-lzq7unw5gr5p', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=0785494f-981a-4c23-8e42-a15d0c582bfb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.773 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 0785494f-981a-4c23-8e42-a15d0c582bfb in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 bound to our chassis
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.775 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.776 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:08 compute-0 ovn_controller[97701]: 2025-12-10T20:01:08Z|00054|binding|INFO|Setting lport 0785494f-981a-4c23-8e42-a15d0c582bfb ovn-installed in OVS
Dec 10 20:01:08 compute-0 ovn_controller[97701]: 2025-12-10T20:01:08Z|00055|binding|INFO|Setting lport 0785494f-981a-4c23-8e42-a15d0c582bfb up in Southbound
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.779 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.787 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.794 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[79208166-252d-4565-b03f-e4ddda9eaa41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:01:08 compute-0 systemd-udevd[243009]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:01:08 compute-0 NetworkManager[56238]: <info>  [1765396868.8159] device (tap0785494f-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:01:08 compute-0 NetworkManager[56238]: <info>  [1765396868.8219] device (tap0785494f-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:01:08 compute-0 systemd-machined[155642]: New machine qemu-5-instance-00000005.
Dec 10 20:01:08 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.841 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea84d8-59e7-4820-8ae7-f01946b927b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.846 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[80766e05-d5c7-492b-9c12-9affcc46c391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.882 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[85071acc-5044-4bbb-96aa-3d43b724244e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.901 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[7193226d-3985-4c1a-9e44-d4ee7ac46668]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 33508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243022, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.914 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd8897f-8ca1-45df-865b-759def9f6a88]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243024, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243024, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.916 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.917 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.918 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.921 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.921 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.922 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:01:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:08.922 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.957 189283 DEBUG nova.compute.manager [req-5fffa60c-ce23-4965-9c1d-6273e5e827ad req-df2d0750-2078-4817-a5a2-077dc851f552 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received event network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.958 189283 DEBUG oslo_concurrency.lockutils [req-5fffa60c-ce23-4965-9c1d-6273e5e827ad req-df2d0750-2078-4817-a5a2-077dc851f552 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.958 189283 DEBUG oslo_concurrency.lockutils [req-5fffa60c-ce23-4965-9c1d-6273e5e827ad req-df2d0750-2078-4817-a5a2-077dc851f552 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.958 189283 DEBUG oslo_concurrency.lockutils [req-5fffa60c-ce23-4965-9c1d-6273e5e827ad req-df2d0750-2078-4817-a5a2-077dc851f552 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:08 compute-0 nova_compute[189279]: 2025-12-10 20:01:08.958 189283 DEBUG nova.compute.manager [req-5fffa60c-ce23-4965-9c1d-6273e5e827ad req-df2d0750-2078-4817-a5a2-077dc851f552 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Processing event network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.687 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.688 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396869.6866236, 26729739-a300-43fe-8678-5294ed41f6ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.688 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] VM Started (Lifecycle Event)
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.700 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.705 189283 INFO nova.virt.libvirt.driver [-] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Instance spawned successfully.
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.705 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.722 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.736 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.742 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.743 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.743 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.744 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.744 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.745 189283 DEBUG nova.virt.libvirt.driver [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
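The six "Found default for ..." lines record the libvirt driver persisting defaults for image properties this instance did not set explicitly. A rough stand-in for that merge step, assuming a plain dictionary of instance properties (the property names and values are the ones logged; the function itself is illustrative, not nova's implementation):

# Defaults the driver reported for this guest.
DETECTED_DEFAULTS = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

def register_undefined_details(instance_props):
    """Return instance properties with defaults filled in only where undefined."""
    merged = dict(DETECTED_DEFAULTS)
    merged.update(instance_props)      # explicitly set image properties win
    return merged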
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.776 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.777 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396869.6867917, 26729739-a300-43fe-8678-5294ed41f6ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.777 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] VM Paused (Lifecycle Event)
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.818 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.826 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765396869.6942723, 26729739-a300-43fe-8678-5294ed41f6ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.826 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] VM Resumed (Lifecycle Event)
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.838 189283 INFO nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Took 7.76 seconds to spawn the instance on the hypervisor.
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.838 189283 DEBUG nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.869 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.877 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.905 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] During sync_power_state the instance has a pending task (spawning). Skip.
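The "Synchronizing instance power state" and "pending task ... Skip" lines show the lifecycle handler comparing the database power state (0) with the hypervisor power state (1) and deferring because task_state is still "spawning". A compact sketch of that decision, with illustrative names and the power-state codes as they appear in the log:

# Power-state codes as logged: 0 = NOSTATE, 1 = RUNNING.
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    """Skip the sync while a task is pending; otherwise reconcile the DB value."""
    if task_state is not None:
        return "skip: pending task (%s)" % task_state
    if db_power_state != vm_power_state:
        return "update DB power_state %s -> %s" % (db_power_state, vm_power_state)
    return "in sync"

print(sync_power_state(NOSTATE, RUNNING, "spawning"))   # skipped, as at 20:01:09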
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.918 189283 INFO nova.compute.manager [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Took 8.38 seconds to build instance.
Dec 10 20:01:09 compute-0 nova_compute[189279]: 2025-12-10 20:01:09.935 189283 DEBUG oslo_concurrency.lockutils [None req-aeea12dd-94fb-4113-acb0-754e66b4f68a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.044 189283 DEBUG nova.compute.manager [req-008063e2-3d18-4241-9023-334e5b311bb3 req-ada3f8a4-33cb-4d20-94d6-e82fd4a1c0a8 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received event network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.044 189283 DEBUG oslo_concurrency.lockutils [req-008063e2-3d18-4241-9023-334e5b311bb3 req-ada3f8a4-33cb-4d20-94d6-e82fd4a1c0a8 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.045 189283 DEBUG oslo_concurrency.lockutils [req-008063e2-3d18-4241-9023-334e5b311bb3 req-ada3f8a4-33cb-4d20-94d6-e82fd4a1c0a8 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.045 189283 DEBUG oslo_concurrency.lockutils [req-008063e2-3d18-4241-9023-334e5b311bb3 req-ada3f8a4-33cb-4d20-94d6-e82fd4a1c0a8 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.045 189283 DEBUG nova.compute.manager [req-008063e2-3d18-4241-9023-334e5b311bb3 req-ada3f8a4-33cb-4d20-94d6-e82fd4a1c0a8 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] No waiting events found dispatching network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.045 189283 WARNING nova.compute.manager [req-008063e2-3d18-4241-9023-334e5b311bb3 req-ada3f8a4-33cb-4d20-94d6-e82fd4a1c0a8 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received unexpected event network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb for instance with vm_state active and task_state None.
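The warning above is a second delivery of the same network-vif-plugged event arriving after the spawn already consumed its waiter, so pop_instance_event finds nothing to dispatch. A small illustration of that path (the waiter objects and dispatch function are hypothetical; only the event name and states come from the log):

def dispatch_external_event(waiters, event_name, vm_state, task_state):
    """Hand an incoming Neutron event to its waiter, or warn when nobody waits."""
    waiter = waiters.pop(event_name, None)
    if waiter is None:
        print("WARNING: unexpected event %s for instance with vm_state %s "
              "and task_state %s" % (event_name, vm_state, task_state))
        return
    waiter.set()   # wake the thread blocked waiting for the event

# Second delivery after the build finished: no waiter left, so only a warning.
dispatch_external_event({}, "network-vif-plugged-0785494f", "active", None)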
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.559 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:11 compute-0 nova_compute[189279]: 2025-12-10 20:01:11.996 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:15 compute-0 podman[243033]: 2025-12-10 20:01:15.096554102 +0000 UTC m=+0.074090883 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:01:16 compute-0 nova_compute[189279]: 2025-12-10 20:01:16.563 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:16 compute-0 nova_compute[189279]: 2025-12-10 20:01:16.998 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:17 compute-0 podman[243053]: 2025-12-10 20:01:17.097981852 +0000 UTC m=+0.078181143 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:01:21 compute-0 nova_compute[189279]: 2025-12-10 20:01:21.566 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:22 compute-0 nova_compute[189279]: 2025-12-10 20:01:22.000 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:23 compute-0 podman[243079]: 2025-12-10 20:01:23.129052096 +0000 UTC m=+0.097252926 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 20:01:23 compute-0 podman[243086]: 2025-12-10 20:01:23.145083416 +0000 UTC m=+0.091713586 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, container_name=kepler, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9)
Dec 10 20:01:23 compute-0 podman[243080]: 2025-12-10 20:01:23.158928848 +0000 UTC m=+0.109596167 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:01:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:23.377 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:23.378 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:01:23.379 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:26 compute-0 nova_compute[189279]: 2025-12-10 20:01:26.569 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:27 compute-0 nova_compute[189279]: 2025-12-10 20:01:27.003 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:29 compute-0 podman[243134]: 2025-12-10 20:01:29.091559745 +0000 UTC m=+0.070879377 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 10 20:01:29 compute-0 podman[243135]: 2025-12-10 20:01:29.113761912 +0000 UTC m=+0.089578589 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:01:29 compute-0 podman[203484]: time="2025-12-10T20:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:01:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:01:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
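The two HTTP access lines show prometheus-podman-exporter querying the libpod REST API over the podman socket (mounted as unix:///run/podman/podman.sock in the exporter's config above). A minimal way to issue the same container listing from Python, assuming read access to that socket; the subclass below is a generic stdlib trick, not part of podman's tooling:

import http.client
import json
import socket

class UnixSocketHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to a local Unix domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixSocketHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for container in json.loads(conn.getresponse().read()):
    print(container.get("Names"), container.get("State"))

Running it needs the same access to the socket that the exporter container is granted.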
Dec 10 20:01:31 compute-0 openstack_network_exporter[205632]: ERROR   20:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:01:31 compute-0 openstack_network_exporter[205632]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:01:31 compute-0 openstack_network_exporter[205632]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:01:31 compute-0 openstack_network_exporter[205632]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:01:31 compute-0 openstack_network_exporter[205632]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:01:31 compute-0 nova_compute[189279]: 2025-12-10 20:01:31.571 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:32 compute-0 nova_compute[189279]: 2025-12-10 20:01:32.005 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:33 compute-0 podman[243175]: 2025-12-10 20:01:33.206319095 +0000 UTC m=+0.169611260 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 10 20:01:36 compute-0 nova_compute[189279]: 2025-12-10 20:01:36.574 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:37 compute-0 nova_compute[189279]: 2025-12-10 20:01:37.008 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:38 compute-0 ovn_controller[97701]: 2025-12-10T20:01:38Z|00056|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 10 20:01:39 compute-0 podman[243201]: 2025-12-10 20:01:39.115349297 +0000 UTC m=+0.093984448 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:01:41 compute-0 nova_compute[189279]: 2025-12-10 20:01:41.577 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:42 compute-0 nova_compute[189279]: 2025-12-10 20:01:42.011 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.174 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.174 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
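These two manager lines note that the [pollsters] source has more pollsters than worker threads, so the agent processes them on a single-thread pool. A compact stand-in for that setup with concurrent.futures (the pollster names below are illustrative; the one-worker pool matches the "[1] threads" in the log):

from concurrent.futures import ThreadPoolExecutor

def poll(name):
    # A real pollster would collect samples from libvirt here.
    return "polled %s" % name

pollsters = ["disk.ephemeral.size", "disk.device.capacity", "memory.usage", "cpu"]
with ThreadPoolExecutor(max_workers=1) as executor:   # [1] worker thread, as logged
    for result in executor.map(poll, pollsters):
        print(result)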
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.174 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.182 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.186 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1fbc523f-accf-4848-80b7-6d997e0c65bf', 'name': 'vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.189 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ac2c8050-72b5-419c-ba99-c4feeb26147a', 'name': 'vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.191 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 26729739-a300-43fe-8678-5294ed41f6ed from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:01:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:42.192 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/26729739-a300-43fe-8678-5294ed41f6ed -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:01:42 compute-0 ovn_controller[97701]: 2025-12-10T20:01:42Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:ad:37 192.168.0.54
Dec 10 20:01:42 compute-0 ovn_controller[97701]: 2025-12-10T20:01:42Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:ad:37 192.168.0.54
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.353 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Wed, 10 Dec 2025 20:01:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-0558b341-5796-4d9f-85a5-35bfcf1abe1d x-openstack-request-id: req-0558b341-5796-4d9f-85a5-35bfcf1abe1d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.354 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "26729739-a300-43fe-8678-5294ed41f6ed", "name": "vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7", "status": "ACTIVE", "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "user_id": "2143e69e49fd49db99c8737c973c1ea5", "metadata": {"metering.server_group": "9d7a68be-d216-4b06-b611-878d356c6d68"}, "hostId": "dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852", "image": {"id": "06e6231d-0a77-4b09-acb3-e7faf5a777be", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/06e6231d-0a77-4b09-acb3-e7faf5a777be"}]}, "flavor": {"id": "0fc2e5b1-b522-4c52-bdef-97db09e458e4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/0fc2e5b1-b522-4c52-bdef-97db09e458e4"}]}, "created": "2025-12-10T20:01:00Z", "updated": "2025-12-10T20:01:09Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.54", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0b:ad:37"}, {"version": 4, "addr": "192.168.122.199", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0b:ad:37"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/26729739-a300-43fe-8678-5294ed41f6ed"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/26729739-a300-43fe-8678-5294ed41f6ed"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-10T20:01:09.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.354 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/26729739-a300-43fe-8678-5294ed41f6ed used request id req-0558b341-5796-4d9f-85a5-35bfcf1abe1d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
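The REQ/RESP pair above is keystoneauth's debug rendering of a plain GET against the Nova API for the new instance's metadata. An equivalent stand-alone request, assuming a valid Keystone token and trust in the internal CA (the URL and microversion header are copied from the log; requests is used here instead of python-novaclient):

import requests

url = ("https://nova-internal.openstack.svc:8774/v2.1/servers/"
       "26729739-a300-43fe-8678-5294ed41f6ed")
headers = {
    "Accept": "application/json",
    "X-Auth-Token": "<keystone token>",           # the log only shows its SHA256
    "X-OpenStack-Nova-API-Version": "2.1",
}
resp = requests.get(url, headers=headers, timeout=30)
server = resp.json()["server"]
print(server["name"], server["OS-EXT-STS:vm_state"], server["status"])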
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.357 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '26729739-a300-43fe-8678-5294ed41f6ed', 'name': 'vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.357 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:01:44.358062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:01:44.361246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.386 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.386 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.387 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.414 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.414 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.415 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.443 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.444 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.444 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.466 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.467 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.467 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.469 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.470 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:01:44.468689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:01:44.470033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.474 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.477 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.480 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets volume: 59 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.484 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 26729739-a300-43fe-8678-5294ed41f6ed / tap0785494f-98 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.484 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.484 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.485 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.485 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.485 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.486 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:01:44.485183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.486 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.487 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.487 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.487 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.487 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.487 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:01:44.487043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.488 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.488 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.489 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.489 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes volume: 7478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.489 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes volume: 1421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:01:44.488792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.490 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.bytes.delta volume: 705 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.492 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:01:44.490379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:01:44.491999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.512 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: 48.76953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.532 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.559 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.582 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/memory.usage volume: 33.30859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.585 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.585 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7>]
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T20:01:44.584822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:01:44.587425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.587 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.588 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.bytes volume: 1738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.589 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes volume: 8574 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.589 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.590 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:01:44.591420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.592 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.592 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.593 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.593 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.593 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.594 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.594 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.595 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.595 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.596 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.596 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.597 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.599 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:01:44.599700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.600 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.600 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.601 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.601 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:01:44.603847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.604 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.604 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.605 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.605 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.607 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.609 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.609 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.610 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.610 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:01:44.608664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:01:44.614201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.697 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.698 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.698 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.771 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.772 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.772 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.831 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.832 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.832 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.887 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.887 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.890 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.891 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.891 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.891 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.891 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 39800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.892 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/cpu volume: 30810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.892 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/cpu volume: 244020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.892 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/cpu volume: 32460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.893 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:01:44.891629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.893 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.894 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 425951231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.894 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 63853652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.894 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 49706577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.894 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 398719696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.894 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 103443581 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.895 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 86126104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.895 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 365261803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:01:44.893885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.895 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 76908904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.895 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.latency volume: 59898361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.896 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 395037622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.896 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 62323348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.896 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 49949275 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.897 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.897 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.897 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.897 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:01:44.897389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.898 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.898 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.898 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.898 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.898 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.899 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.899 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.899 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.899 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.899 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.900 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.900 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.901 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:01:44.900757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.901 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.901 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.901 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.902 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.902 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.902 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.902 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.902 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.902 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.903 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.903 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.904 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.904 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.904 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.904 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.905 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.905 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.905 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.905 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:01:44.904201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.906 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.906 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.906 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.906 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.906 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.907 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.908 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.908 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.908 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.908 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:01:44.908083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.909 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 816753194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.910 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 10242364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.910 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.910 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 1415205887 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:01:44.909874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.910 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 13598806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.911 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.911 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 1285515032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.911 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 10530105 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.911 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.911 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 1540679388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.912 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 11471208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.912 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.913 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.913 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.913 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.913 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.913 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.914 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.914 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.914 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.914 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.914 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:01:44.913222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.915 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.915 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.915 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.915 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.916 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.916 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.917 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:01:44.916859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.917 14 DEBUG ceilometer.compute.pollsters [-] ac2c8050-72b5-419c-ba99-c4feeb26147a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.917 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.918 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7>]
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T20:01:44.918651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:01:44.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:01:46 compute-0 podman[243228]: 2025-12-10 20:01:46.111512972 +0000 UTC m=+0.093259738 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec 10 20:01:46 compute-0 nova_compute[189279]: 2025-12-10 20:01:46.582 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:47 compute-0 nova_compute[189279]: 2025-12-10 20:01:47.015 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:48 compute-0 podman[243248]: 2025-12-10 20:01:48.083204752 +0000 UTC m=+0.063844516 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:01:51 compute-0 nova_compute[189279]: 2025-12-10 20:01:51.584 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:52 compute-0 nova_compute[189279]: 2025-12-10 20:01:52.018 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:54 compute-0 podman[243272]: 2025-12-10 20:01:54.10808839 +0000 UTC m=+0.088476660 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 20:01:54 compute-0 podman[243274]: 2025-12-10 20:01:54.119848996 +0000 UTC m=+0.093436883 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Dec 10 20:01:54 compute-0 podman[243273]: 2025-12-10 20:01:54.120633507 +0000 UTC m=+0.104507631 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 20:01:55 compute-0 nova_compute[189279]: 2025-12-10 20:01:55.974 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:55 compute-0 nova_compute[189279]: 2025-12-10 20:01:55.975 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:55 compute-0 nova_compute[189279]: 2025-12-10 20:01:55.975 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:01:56 compute-0 nova_compute[189279]: 2025-12-10 20:01:56.350 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:01:56 compute-0 nova_compute[189279]: 2025-12-10 20:01:56.351 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:01:56 compute-0 nova_compute[189279]: 2025-12-10 20:01:56.351 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:01:56 compute-0 nova_compute[189279]: 2025-12-10 20:01:56.586 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.019 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.338 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updating instance_info_cache with network_info: [{"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.355 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.357 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.359 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.360 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.361 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.362 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.363 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.364 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.364 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.365 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.395 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.395 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.395 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.395 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.503 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.588 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.589 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.654 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.655 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.731 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.732 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.800 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.810 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.885 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.887 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.967 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:57 compute-0 nova_compute[189279]: 2025-12-10 20:01:57.968 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.032 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.034 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.101 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.107 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.170 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.171 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.232 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.233 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.295 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.297 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.358 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.370 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.431 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.432 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.490 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.491 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.552 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.552 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:01:58 compute-0 nova_compute[189279]: 2025-12-10 20:01:58.619 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.043 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.045 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4617MB free_disk=72.30836868286133GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.046 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.046 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.196 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.197 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ac2c8050-72b5-419c-ba99-c4feeb26147a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.197 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 1fbc523f-accf-4848-80b7-6d997e0c65bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.197 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.198 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.199 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.292 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.338 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.373 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:01:59 compute-0 nova_compute[189279]: 2025-12-10 20:01:59.373 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:01:59 compute-0 podman[203484]: time="2025-12-10T20:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:01:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:01:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec 10 20:02:00 compute-0 podman[243379]: 2025-12-10 20:02:00.113020442 +0000 UTC m=+0.078323976 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:02:00 compute-0 podman[243378]: 2025-12-10 20:02:00.128111658 +0000 UTC m=+0.103330089 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 20:02:01 compute-0 openstack_network_exporter[205632]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:02:01 compute-0 openstack_network_exporter[205632]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:02:01 compute-0 openstack_network_exporter[205632]: ERROR   20:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:02:01 compute-0 openstack_network_exporter[205632]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:02:01 compute-0 openstack_network_exporter[205632]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:02:01 compute-0 nova_compute[189279]: 2025-12-10 20:02:01.590 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:02 compute-0 nova_compute[189279]: 2025-12-10 20:02:02.022 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:04 compute-0 podman[243419]: 2025-12-10 20:02:04.164505952 +0000 UTC m=+0.138813223 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Dec 10 20:02:06 compute-0 nova_compute[189279]: 2025-12-10 20:02:06.592 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:07 compute-0 nova_compute[189279]: 2025-12-10 20:02:07.024 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:10 compute-0 podman[243444]: 2025-12-10 20:02:10.101969407 +0000 UTC m=+0.074162014 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4)
Dec 10 20:02:11 compute-0 nova_compute[189279]: 2025-12-10 20:02:11.594 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:12 compute-0 nova_compute[189279]: 2025-12-10 20:02:12.026 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:16 compute-0 nova_compute[189279]: 2025-12-10 20:02:16.598 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:17 compute-0 nova_compute[189279]: 2025-12-10 20:02:17.029 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:17 compute-0 podman[243462]: 2025-12-10 20:02:17.112933912 +0000 UTC m=+0.089247180 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:02:19 compute-0 podman[243483]: 2025-12-10 20:02:19.115135084 +0000 UTC m=+0.087302158 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:02:21 compute-0 nova_compute[189279]: 2025-12-10 20:02:21.602 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:22 compute-0 nova_compute[189279]: 2025-12-10 20:02:22.031 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:23.378 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:23.378 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:23.379 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:25 compute-0 podman[243509]: 2025-12-10 20:02:25.108866792 +0000 UTC m=+0.085062847 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 10 20:02:25 compute-0 podman[243511]: 2025-12-10 20:02:25.120902986 +0000 UTC m=+0.088465359 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec 10 20:02:25 compute-0 podman[243510]: 2025-12-10 20:02:25.137259136 +0000 UTC m=+0.109354321 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 20:02:26 compute-0 nova_compute[189279]: 2025-12-10 20:02:26.605 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:27 compute-0 nova_compute[189279]: 2025-12-10 20:02:27.036 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:29 compute-0 podman[203484]: time="2025-12-10T20:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:02:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:02:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec 10 20:02:31 compute-0 podman[243566]: 2025-12-10 20:02:31.120526909 +0000 UTC m=+0.092308703 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 10 20:02:31 compute-0 podman[243567]: 2025-12-10 20:02:31.138529263 +0000 UTC m=+0.108548150 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:02:31 compute-0 openstack_network_exporter[205632]: ERROR   20:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:02:31 compute-0 openstack_network_exporter[205632]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:02:31 compute-0 openstack_network_exporter[205632]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:02:31 compute-0 openstack_network_exporter[205632]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:02:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:02:31 compute-0 openstack_network_exporter[205632]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:02:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:02:31 compute-0 nova_compute[189279]: 2025-12-10 20:02:31.608 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:32 compute-0 nova_compute[189279]: 2025-12-10 20:02:32.040 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:35 compute-0 podman[243609]: 2025-12-10 20:02:35.157111916 +0000 UTC m=+0.133454638 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 10 20:02:36 compute-0 nova_compute[189279]: 2025-12-10 20:02:36.611 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:37 compute-0 nova_compute[189279]: 2025-12-10 20:02:37.043 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:41 compute-0 podman[243635]: 2025-12-10 20:02:41.169177657 +0000 UTC m=+0.135852742 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:02:41 compute-0 nova_compute[189279]: 2025-12-10 20:02:41.612 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:42 compute-0 nova_compute[189279]: 2025-12-10 20:02:42.045 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:46 compute-0 nova_compute[189279]: 2025-12-10 20:02:46.616 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:47 compute-0 nova_compute[189279]: 2025-12-10 20:02:47.050 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:48 compute-0 podman[243655]: 2025-12-10 20:02:48.146100236 +0000 UTC m=+0.112573307 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Dec 10 20:02:50 compute-0 podman[243678]: 2025-12-10 20:02:50.099790944 +0000 UTC m=+0.081004159 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:02:51 compute-0 nova_compute[189279]: 2025-12-10 20:02:51.620 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:52 compute-0 nova_compute[189279]: 2025-12-10 20:02:52.053 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.804 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.805 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.806 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.806 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.806 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.808 189283 INFO nova.compute.manager [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Terminating instance
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.809 189283 DEBUG nova.compute.manager [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:02:53 compute-0 kernel: tap5d3f5317-70 (unregistering): left promiscuous mode
Dec 10 20:02:53 compute-0 NetworkManager[56238]: <info>  [1765396973.8499] device (tap5d3f5317-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.857 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:53 compute-0 ovn_controller[97701]: 2025-12-10T20:02:53Z|00057|binding|INFO|Releasing lport 5d3f5317-707c-4080-a612-71018c7ba2ed from this chassis (sb_readonly=0)
Dec 10 20:02:53 compute-0 ovn_controller[97701]: 2025-12-10T20:02:53Z|00058|binding|INFO|Setting lport 5d3f5317-707c-4080-a612-71018c7ba2ed down in Southbound
Dec 10 20:02:53 compute-0 ovn_controller[97701]: 2025-12-10T20:02:53Z|00059|binding|INFO|Removing iface tap5d3f5317-70 ovn-installed in OVS
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.861 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.867 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:37:97 192.168.0.123'], port_security=['fa:16:3e:af:37:97 192.168.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-w43iflqhcsjr-gtk4633myb43-port-tdaih7wc5ctt', 'neutron:cidrs': '192.168.0.123/24', 'neutron:device_id': 'ac2c8050-72b5-419c-ba99-c4feeb26147a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-w43iflqhcsjr-gtk4633myb43-port-tdaih7wc5ctt', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=5d3f5317-707c-4080-a612-71018c7ba2ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.869 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 5d3f5317-707c-4080-a612-71018c7ba2ed in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 unbound from our chassis
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.870 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 20:02:53 compute-0 nova_compute[189279]: 2025-12-10 20:02:53.873 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.891 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb4de09-0792-4ae9-b72f-f8a9b73db5c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:02:53 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 10 20:02:53 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 4min 57.175s CPU time.
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.926 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[a5553eac-6367-4d42-b809-2463e506ee75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:02:53 compute-0 systemd-machined[155642]: Machine qemu-2-instance-00000002 terminated.
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.930 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[ab25334a-70f8-43a5-b08f-25e7cec6df41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.966 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[54d3a0f8-7093-4f59-8085-f6db8fbc39de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:02:53 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:53.990 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff4abed-f659-432c-9c1d-1f7a26e2185c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 33508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243714, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.016 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e934a01f-8bfa-4abf-86df-5add91a5642d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243715, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243715, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.018 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.019 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.027 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.027 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.027 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.028 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.028 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.112 189283 INFO nova.virt.libvirt.driver [-] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Instance destroyed successfully.
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.113 189283 DEBUG nova.objects.instance [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'resources' on Instance uuid ac2c8050-72b5-419c-ba99-c4feeb26147a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.125 189283 DEBUG nova.virt.libvirt.vif [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T19:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-w43iflqhcsjr-gtk4633myb43-vnf-7z4ydfelvlzf',id=2,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T19:54:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-fjjhmsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T19:54:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzgxNTg4NTE1ODUyMTA1Mjc1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 10 20:02:54 compute-0 nova_compute[189279]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzgxNTg4NTE1ODUyMTA1Mjc1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM4MTU4ODUxNTg1MjEwNTI3NTM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zODE1ODg1MTU4NTIxMDUyNzUzPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=ac2c8050-72b5-419c-ba99-c4feeb26147a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.125 189283 DEBUG nova.network.os_vif_util [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "5d3f5317-707c-4080-a612-71018c7ba2ed", "address": "fa:16:3e:af:37:97", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d3f5317-70", "ovs_interfaceid": "5d3f5317-707c-4080-a612-71018c7ba2ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.126 189283 DEBUG nova.network.os_vif_util [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:37:97,bridge_name='br-int',has_traffic_filtering=True,id=5d3f5317-707c-4080-a612-71018c7ba2ed,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5d3f5317-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.126 189283 DEBUG os_vif [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:37:97,bridge_name='br-int',has_traffic_filtering=True,id=5d3f5317-707c-4080-a612-71018c7ba2ed,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5d3f5317-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.128 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.128 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d3f5317-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.130 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.132 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.135 189283 INFO os_vif [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:37:97,bridge_name='br-int',has_traffic_filtering=True,id=5d3f5317-707c-4080-a612-71018c7ba2ed,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5d3f5317-70')
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.136 189283 INFO nova.virt.libvirt.driver [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Deleting instance files /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a_del
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.137 189283 INFO nova.virt.libvirt.driver [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Deletion of /var/lib/nova/instances/ac2c8050-72b5-419c-ba99-c4feeb26147a_del complete
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.193 189283 INFO nova.compute.manager [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Took 0.38 seconds to destroy the instance on the hypervisor.
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.194 189283 DEBUG oslo.service.loopingcall [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.195 189283 DEBUG nova.compute.manager [-] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.195 189283 DEBUG nova.network.neutron [-] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:02:54 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 20:02:54.125 189283 DEBUG nova.virt.libvirt.vif [None req-54096baf-00 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.508 189283 DEBUG nova.compute.manager [req-053d752c-aa15-43a8-be87-0dafee32ec68 req-3276b77e-82b7-49dd-847c-b25d28058a11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received event network-vif-unplugged-5d3f5317-707c-4080-a612-71018c7ba2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.509 189283 DEBUG oslo_concurrency.lockutils [req-053d752c-aa15-43a8-be87-0dafee32ec68 req-3276b77e-82b7-49dd-847c-b25d28058a11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.509 189283 DEBUG oslo_concurrency.lockutils [req-053d752c-aa15-43a8-be87-0dafee32ec68 req-3276b77e-82b7-49dd-847c-b25d28058a11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.509 189283 DEBUG oslo_concurrency.lockutils [req-053d752c-aa15-43a8-be87-0dafee32ec68 req-3276b77e-82b7-49dd-847c-b25d28058a11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.510 189283 DEBUG nova.compute.manager [req-053d752c-aa15-43a8-be87-0dafee32ec68 req-3276b77e-82b7-49dd-847c-b25d28058a11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] No waiting events found dispatching network-vif-unplugged-5d3f5317-707c-4080-a612-71018c7ba2ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.510 189283 DEBUG nova.compute.manager [req-053d752c-aa15-43a8-be87-0dafee32ec68 req-3276b77e-82b7-49dd-847c-b25d28058a11 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received event network-vif-unplugged-5d3f5317-707c-4080-a612-71018c7ba2ed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.625 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:02:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:02:54.627 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:02:54 compute-0 nova_compute[189279]: 2025-12-10 20:02:54.628 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.640 189283 DEBUG nova.network.neutron [-] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.658 189283 INFO nova.compute.manager [-] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Took 1.46 seconds to deallocate network for instance.
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.692 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.693 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.825 189283 DEBUG nova.compute.provider_tree [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.843 189283 DEBUG nova.scheduler.client.report [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.864 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.880 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.881 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.888 189283 INFO nova.scheduler.client.report [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Deleted allocations for instance ac2c8050-72b5-419c-ba99-c4feeb26147a
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.914 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.915 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:02:55 compute-0 nova_compute[189279]: 2025-12-10 20:02:55.979 189283 DEBUG oslo_concurrency.lockutils [None req-54096baf-0053-40f6-8eac-cea7c4c43ff7 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:56 compute-0 podman[243739]: 2025-12-10 20:02:56.08781603 +0000 UTC m=+0.069644984 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm)
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.099 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.099 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.099 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:02:56 compute-0 podman[243738]: 2025-12-10 20:02:56.111969599 +0000 UTC m=+0.095146429 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:02:56 compute-0 podman[243740]: 2025-12-10 20:02:56.139874159 +0000 UTC m=+0.108433935 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9)
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.623 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.673 189283 DEBUG nova.compute.manager [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received event network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.673 189283 DEBUG oslo_concurrency.lockutils [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.673 189283 DEBUG oslo_concurrency.lockutils [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.674 189283 DEBUG oslo_concurrency.lockutils [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ac2c8050-72b5-419c-ba99-c4feeb26147a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.674 189283 DEBUG nova.compute.manager [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] No waiting events found dispatching network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.674 189283 WARNING nova.compute.manager [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received unexpected event network-vif-plugged-5d3f5317-707c-4080-a612-71018c7ba2ed for instance with vm_state deleted and task_state None.
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.674 189283 DEBUG nova.compute.manager [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Received event network-changed-5d3f5317-707c-4080-a612-71018c7ba2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.674 189283 DEBUG nova.compute.manager [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Refreshing instance network info cache due to event network-changed-5d3f5317-707c-4080-a612-71018c7ba2ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.675 189283 DEBUG oslo_concurrency.lockutils [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.675 189283 DEBUG oslo_concurrency.lockutils [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.675 189283 DEBUG nova.network.neutron [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Refreshing network info cache for port 5d3f5317-707c-4080-a612-71018c7ba2ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:02:56 compute-0 nova_compute[189279]: 2025-12-10 20:02:56.825 189283 DEBUG nova.network.neutron [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.378 189283 DEBUG nova.network.neutron [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.380 189283 DEBUG oslo_concurrency.lockutils [req-1e9b8567-b8f0-47ca-b014-e9b7335e7f6f req-d63d6b89-8f82-4c65-996e-cc266bed2622 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-ac2c8050-72b5-419c-ba99-c4feeb26147a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.646 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updating instance_info_cache with network_info: [{"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.674 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.674 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.674 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.675 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.675 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.675 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.675 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.675 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.675 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.676 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.699 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.700 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.700 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.701 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.828 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.902 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.904 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.966 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:57 compute-0 nova_compute[189279]: 2025-12-10 20:02:57.967 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.031 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.032 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.095 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.103 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.163 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.164 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.227 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.229 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.290 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.291 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.359 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.372 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.477 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.479 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.547 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.550 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.624 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.626 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:02:58 compute-0 nova_compute[189279]: 2025-12-10 20:02:58.713 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.102 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.103 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4789MB free_disk=72.33085250854492GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.104 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.104 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.130 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.186 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.186 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 1fbc523f-accf-4848-80b7-6d997e0c65bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.186 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.186 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.187 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.261 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.276 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.298 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:02:59 compute-0 nova_compute[189279]: 2025-12-10 20:02:59.298 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:02:59 compute-0 podman[203484]: time="2025-12-10T20:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:02:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:02:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec 10 20:03:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:03:00.630 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:03:01 compute-0 openstack_network_exporter[205632]: ERROR   20:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:03:01 compute-0 openstack_network_exporter[205632]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:03:01 compute-0 openstack_network_exporter[205632]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:03:01 compute-0 openstack_network_exporter[205632]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:03:01 compute-0 openstack_network_exporter[205632]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:03:01 compute-0 nova_compute[189279]: 2025-12-10 20:03:01.627 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:02 compute-0 podman[243830]: 2025-12-10 20:03:02.12172264 +0000 UTC m=+0.091702446 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 20:03:02 compute-0 podman[243831]: 2025-12-10 20:03:02.123368524 +0000 UTC m=+0.094549152 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:03:04 compute-0 nova_compute[189279]: 2025-12-10 20:03:04.134 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:06 compute-0 nova_compute[189279]: 2025-12-10 20:03:06.630 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:06 compute-0 podman[243873]: 2025-12-10 20:03:06.661171356 +0000 UTC m=+0.110801579 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:03:09 compute-0 nova_compute[189279]: 2025-12-10 20:03:09.109 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765396974.1081042, ac2c8050-72b5-419c-ba99-c4feeb26147a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:03:09 compute-0 nova_compute[189279]: 2025-12-10 20:03:09.110 189283 INFO nova.compute.manager [-] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] VM Stopped (Lifecycle Event)
Dec 10 20:03:09 compute-0 nova_compute[189279]: 2025-12-10 20:03:09.128 189283 DEBUG nova.compute.manager [None req-697e8f1f-dd30-4570-962f-8f75bbb75ee5 - - - - - -] [instance: ac2c8050-72b5-419c-ba99-c4feeb26147a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:03:09 compute-0 nova_compute[189279]: 2025-12-10 20:03:09.137 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:10 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 10 20:03:11 compute-0 nova_compute[189279]: 2025-12-10 20:03:11.632 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:12 compute-0 podman[243901]: 2025-12-10 20:03:12.146524453 +0000 UTC m=+0.123220563 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 20:03:14 compute-0 nova_compute[189279]: 2025-12-10 20:03:14.138 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:16 compute-0 nova_compute[189279]: 2025-12-10 20:03:16.635 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:19 compute-0 podman[243919]: 2025-12-10 20:03:19.116739679 +0000 UTC m=+0.093995190 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 10 20:03:19 compute-0 nova_compute[189279]: 2025-12-10 20:03:19.141 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:21 compute-0 podman[243941]: 2025-12-10 20:03:21.082684142 +0000 UTC m=+0.064591808 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
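The node_exporter record above logs the full config_data the container was created from, including the long list of --collector flags. A rough sketch of how a few of those keys map onto podman run arguments follows; the mapping is illustrative only, the real containers are created by edpm_ansible, and this is not the exact command it issues:

    # Illustrative sketch only: translate a few of the config_data keys logged
    # above into podman run arguments (subset of volumes/command for brevity).
    config_data = {
        "image": "quay.io/prometheus/node-exporter:v1.5.0",
        "restart": "always",
        "user": "root",
        "privileged": True,
        "ports": ["9100:9100"],
        "net": "host",
        "command": ["--web.config.file=/etc/node_exporter/node_exporter.yaml",
                    "--collector.systemd"],
        "volumes": ["/var/lib/openstack/config/telemetry/node_exporter.yaml:"
                    "/etc/node_exporter/node_exporter.yaml:z"],
    }

    args = ["podman", "run", "-d", "--name", "node_exporter"]
    args += ["--net", config_data["net"], "--user", config_data["user"]]
    if config_data.get("privileged"):
        args.append("--privileged")
    args += ["--restart", config_data["restart"]]
    for port in config_data.get("ports", []):
        args += ["-p", port]
    for volume in config_data.get("volumes", []):
        args += ["-v", volume]
    args.append(config_data["image"])
    args += config_data.get("command", [])
    print(" ".join(args))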
Dec 10 20:03:21 compute-0 nova_compute[189279]: 2025-12-10 20:03:21.639 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:03:23.378 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:03:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:03:23.379 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:03:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:03:23.380 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:03:24 compute-0 nova_compute[189279]: 2025-12-10 20:03:24.142 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:26 compute-0 nova_compute[189279]: 2025-12-10 20:03:26.642 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:26 compute-0 ovn_controller[97701]: 2025-12-10T20:03:26Z|00060|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 10 20:03:27 compute-0 podman[243965]: 2025-12-10 20:03:27.097777429 +0000 UTC m=+0.072587444 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 10 20:03:27 compute-0 podman[243967]: 2025-12-10 20:03:27.151006451 +0000 UTC m=+0.117017710 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec 10 20:03:27 compute-0 podman[243966]: 2025-12-10 20:03:27.161977755 +0000 UTC m=+0.119784543 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:03:29 compute-0 nova_compute[189279]: 2025-12-10 20:03:29.146 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:29 compute-0 podman[203484]: time="2025-12-10T20:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:03:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:03:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
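The two GET requests above are the podman_exporter querying the libpod REST API over /run/podman/podman.sock (the socket is bind-mounted into the container per its config_data). A small sketch of issuing the same containers/json query from Python, assuming the socket path shown in the log and sufficient permissions to read it:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket, enough for the local podman API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    # Same endpoint as the logged request (all containers, no size info).
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    for c in containers:
        print(c.get("Names"), c.get("State"))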
Dec 10 20:03:31 compute-0 openstack_network_exporter[205632]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:03:31 compute-0 openstack_network_exporter[205632]: ERROR   20:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:03:31 compute-0 openstack_network_exporter[205632]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:03:31 compute-0 openstack_network_exporter[205632]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:03:31 compute-0 openstack_network_exporter[205632]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
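The exporter errors above come from its appctl calls: ovn-northd and a reachable ovsdb-server control socket simply do not exist on this compute node, and the dpif-netdev commands fail most likely because no userspace (netdev) datapath is configured. A minimal sketch for checking which control sockets are actually present; the glob patterns are the conventional OVS/OVN defaults and are an assumption here, not taken from the log:

    # Minimal sketch (assumed conventional socket paths): list which OVS/OVN
    # daemon control sockets exist on this node, which is roughly what the
    # exporter's appctl calls depend on.
    import glob

    candidates = {
        "ovs-vswitchd":   "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovsdb-server":   "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-controller": "/var/run/ovn/ovn-controller.*.ctl",
        "ovn-northd":     "/var/run/ovn/ovn-northd.*.ctl",
    }
    for daemon, pattern in candidates.items():
        matches = glob.glob(pattern)
        print(f"{daemon}: {matches[0] if matches else 'no control socket found'}")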
Dec 10 20:03:31 compute-0 nova_compute[189279]: 2025-12-10 20:03:31.644 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:33 compute-0 podman[244020]: 2025-12-10 20:03:33.083312949 +0000 UTC m=+0.061922167 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:03:33 compute-0 podman[244019]: 2025-12-10 20:03:33.105013463 +0000 UTC m=+0.088294727 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 20:03:34 compute-0 nova_compute[189279]: 2025-12-10 20:03:34.149 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:36 compute-0 nova_compute[189279]: 2025-12-10 20:03:36.646 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:37 compute-0 podman[244059]: 2025-12-10 20:03:37.205056435 +0000 UTC m=+0.185005749 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 20:03:39 compute-0 nova_compute[189279]: 2025-12-10 20:03:39.152 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:41 compute-0 nova_compute[189279]: 2025-12-10 20:03:41.648 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.175 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.175 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.175 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa19c7260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
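The run of "Registering pollster ... to be executed via executor" messages above is the polling manager handing each pollster to a concurrent.futures.ThreadPoolExecutor; with only one worker thread (as logged at 20:03:42.175) the tasks are effectively serialized. A stripped-down sketch of that dispatch pattern, not ceilometer's actual code:

    # Stripped-down sketch of the pattern logged above: more pollster tasks
    # than worker threads, so execution is serialized on one thread.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for one pollster's sample-collection work.
        return f"polled {name}"

    pollsters = ["disk.ephemeral.size", "disk.device.capacity", "disk.root.size",
                 "network.incoming.packets", "network.incoming.packets.drop"]

    with ThreadPoolExecutor(max_workers=1) as executor:   # 1 thread, 5 tasks
        futures = [executor.submit(poll, p) for p in pollsters]
        for f in futures:
            print(f.result())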
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.183 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.188 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1fbc523f-accf-4848-80b7-6d997e0c65bf', 'name': 'vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.193 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '26729739-a300-43fe-8678-5294ed41f6ed', 'name': 'vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
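The three instance-data records above are what libvirt discovery handed to the pollsters. A quick summary sketch over them; the flavor values are copied from the log records, with other fields trimmed for brevity:

    # Summarize the three discovered instances logged above
    # (flavor values copied from the records, other fields omitted).
    instances = [
        {"name": "test_0",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1}},
        {"name": "vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1}},
        {"name": "vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1}},
    ]

    total_vcpus = sum(i["flavor"]["vcpus"] for i in instances)
    total_ram_mb = sum(i["flavor"]["ram"] for i in instances)
    print(f"{len(instances)} instances, {total_vcpus} vCPUs, {total_ram_mb} MiB RAM")
    # -> 3 instances, 3 vCPUs, 1536 MiB RAM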
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.193 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.194 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.196 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:03:42.194239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.197 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:03:42.197745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.226 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.226 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.227 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.255 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.256 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.256 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.286 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.286 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.286 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
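Each of the three instances above reports the same pattern for disk.device.capacity: two devices of 1073741824 bytes (consistent with the m1.small flavor's 1 GiB root and 1 GiB ephemeral disks) plus one small third device of a few hundred KiB. Summing the logged samples per instance:

    # Sum the disk.device.capacity samples logged above, per instance (bytes).
    samples = {
        "12986b74-7b15-4ff4-9019-081950660d4b": [1073741824, 1073741824, 485376],
        "1fbc523f-accf-4848-80b7-6d997e0c65bf": [1073741824, 1073741824, 583680],
        "26729739-a300-43fe-8678-5294ed41f6ed": [1073741824, 1073741824, 583680],
    }
    for instance, volumes in samples.items():
        total = sum(volumes)
        print(f"{instance}: {total} bytes (~{total / 1024**3:.2f} GiB)")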
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.288 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.288 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.289 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.289 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:03:42.288432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:03:42.289688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.294 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.297 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.300 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.301 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.301 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.301 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:03:42.301413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.302 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.302 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.303 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.304 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.304 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.305 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.305 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:03:42.303535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.306 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:03:42.305369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.306 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.307 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:03:42.307461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.308 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.308 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes.delta volume: 795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.309 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.309 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:03:42.309449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.333 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: 48.76953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.359 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.379 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.380 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.381 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 2388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.381 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.bytes volume: 1822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.381 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:03:42.380702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:03:42.382268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.382 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.383 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.383 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.383 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.383 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.383 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.383 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.384 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.384 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.385 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.385 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.385 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:03:42.384781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:03:42.386226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.386 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:03:42.387605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.388 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.388 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.388 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.389 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:03:42.388997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.450 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.450 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.451 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.515 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.516 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.516 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.576 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.577 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.578 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.579 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.580 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 41090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:03:42.580230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.581 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/cpu volume: 32130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.581 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/cpu volume: 33850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.582 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.583 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 425951231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.584 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 63853652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.584 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 49706577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.585 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 398719696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:03:42.583148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.585 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 103443581 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.586 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.latency volume: 86126104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.586 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 395037622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.586 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 62323348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.587 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 49949275 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.588 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.589 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.590 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.590 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:03:42.589143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.591 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.591 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.592 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.592 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.592 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.593 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:03:42.594944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.595 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.596 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.596 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.597 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.597 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.598 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.598 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.598 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.599 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.600 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.601 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.602 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.602 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.603 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.604 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.604 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.604 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:03:42.601381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.605 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.606 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
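The per-instance DEBUG lines in the cycle above follow a fixed pattern: _stats_to_sample logs "<instance-uuid>/<meter> volume: <value>" once per sample. A minimal sketch for pulling those triples out of a saved copy of this journal; the file name, function names and regex are illustrative assumptions, only the line format comes from the log:

    import re
    from collections import defaultdict

    # Matches the ceilometer.compute.pollsters debug lines seen above, e.g.
    # "... ceilometer.compute.pollsters [-] 12986b74-.../disk.device.write.bytes volume: 41779200 _stats_to_sample ..."
    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<value>\d+)"
    )

    def collect_samples(path):
        """Return {(instance, meter): [values, ...]} from a saved journal excerpt."""
        samples = defaultdict(list)
        with open(path) as fh:
            for line in fh:
                m = SAMPLE_RE.search(line)
                if m:
                    samples[(m["instance"], m["meter"])].append(int(m["value"]))
        return samples

    if __name__ == "__main__":
        # "compute-0.log" is a placeholder for wherever this journal was exported.
        for (instance, meter), values in collect_samples("compute-0.log").items():
            print(instance, meter, values)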
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.607 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:03:42.608512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.608 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.609 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.610 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
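The power.state samples above all report volume 1, which corresponds to a running instance. A small sketch mapping the value to a label; the table is assumed from nova's commonly documented power_state codes, not from anything in this log:

    # Nova power_state codes (per nova.compute.power_state); value 1 above = "Running".
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    def describe_power_state(volume: int) -> str:
        # Unknown codes are passed through rather than guessed at.
        return POWER_STATES.get(volume, f"UNKNOWN({volume})")

    print(describe_power_state(1))  # -> RUNNING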
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.611 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.612 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:03:42.612811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.613 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 816753194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.614 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 10242364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.614 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.615 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 1415205887 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.615 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 13598806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.616 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.616 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 1570902949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.617 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 11471208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.617 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.619 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.619 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.620 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:03:42.619509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.621 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.621 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.621 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.621 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.622 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.622 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.622 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.623 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.623 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:03:42.623694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.624 14 DEBUG ceilometer.compute.pollsters [-] 1fbc523f-accf-4848-80b7-6d997e0c65bf/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.624 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.625 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:03:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:03:42.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
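At the end of the cycle the manager emits one "Finished processing pollster [...]" line per meter, as in the block above. A sketch that tallies those lines from an exported journal to confirm every configured pollster completed; the script path and argument handling are assumptions, the message format is taken from the log:

    import re
    import sys

    # One of these is logged per meter at the end of a polling cycle, e.g.
    # "Finished processing pollster [disk.device.write.bytes]."
    FINISHED_RE = re.compile(r"Finished processing pollster \[(?P<meter>[\w.]+)\]")

    def finished_pollsters(path):
        """Return the set of meter names that completed in the captured cycle."""
        meters = set()
        with open(path) as fh:
            for line in fh:
                m = FINISHED_RE.search(line)
                if m:
                    meters.add(m["meter"])
        return meters

    if __name__ == "__main__":
        # sys.argv[1] is a placeholder path to an exported journal excerpt.
        meters = finished_pollsters(sys.argv[1])
        print(f"{len(meters)} pollsters completed:", ", ".join(sorted(meters)))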
Dec 10 20:03:43 compute-0 podman[244085]: 2025-12-10 20:03:43.128496382 +0000 UTC m=+0.102001636 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
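The podman health-check entry above packs key=value metadata (name, health_status, health_failing_streak, ...) into one parenthesised list; the nested config_data payload makes naive comma-splitting unreliable. A sketch that extracts only a few flat keys with targeted regexes; the chosen field list and the truncated sample line are illustrative, the key names come from the log:

    import re

    # Pulls selected key=value fields out of a podman "container health_status"
    # journal line such as the ceilometer_agent_compute entry above. Only flat
    # fields are extracted; the nested config_data blob is deliberately skipped.
    FIELDS = ("name", "health_status", "health_failing_streak", "config_id")

    def parse_health_line(line):
        out = {}
        for key in FIELDS:
            m = re.search(rf"\b{key}=([^,)]+)", line)
            if m:
                out[key] = m.group(1)
        return out

    line = ("... container health_status 84f69dff8366 (image=..., "
            "name=ceilometer_agent_compute, health_status=healthy, "
            "health_failing_streak=0, ...)")
    print(parse_health_line(line))
    # {'name': 'ceilometer_agent_compute', 'health_status': 'healthy', 'health_failing_streak': '0'}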
Dec 10 20:03:44 compute-0 nova_compute[189279]: 2025-12-10 20:03:44.155 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:46 compute-0 nova_compute[189279]: 2025-12-10 20:03:46.651 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:49 compute-0 nova_compute[189279]: 2025-12-10 20:03:49.157 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:50 compute-0 podman[244104]: 2025-12-10 20:03:50.08428939 +0000 UTC m=+0.065125874 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:03:51 compute-0 nova_compute[189279]: 2025-12-10 20:03:51.653 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:52 compute-0 podman[244124]: 2025-12-10 20:03:52.123104914 +0000 UTC m=+0.105378756 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:03:54 compute-0 nova_compute[189279]: 2025-12-10 20:03:54.160 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:56 compute-0 nova_compute[189279]: 2025-12-10 20:03:56.655 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:58 compute-0 podman[244147]: 2025-12-10 20:03:58.116347517 +0000 UTC m=+0.084992157 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 20:03:58 compute-0 podman[244148]: 2025-12-10 20:03:58.12236103 +0000 UTC m=+0.096976481 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:03:58 compute-0 podman[244149]: 2025-12-10 20:03:58.141777392 +0000 UTC m=+0.114659476 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_id=edpm, vcs-type=git, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Dec 10 20:03:59 compute-0 nova_compute[189279]: 2025-12-10 20:03:59.162 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:03:59 compute-0 nova_compute[189279]: 2025-12-10 20:03:59.299 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:03:59 compute-0 nova_compute[189279]: 2025-12-10 20:03:59.300 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:03:59 compute-0 nova_compute[189279]: 2025-12-10 20:03:59.300 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:03:59 compute-0 nova_compute[189279]: 2025-12-10 20:03:59.301 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:03:59 compute-0 podman[203484]: time="2025-12-10T20:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:03:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:03:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec 10 20:04:00 compute-0 nova_compute[189279]: 2025-12-10 20:04:00.381 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:04:00 compute-0 nova_compute[189279]: 2025-12-10 20:04:00.382 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:04:00 compute-0 nova_compute[189279]: 2025-12-10 20:04:00.382 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:04:00 compute-0 nova_compute[189279]: 2025-12-10 20:04:00.383 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:04:01 compute-0 openstack_network_exporter[205632]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:04:01 compute-0 openstack_network_exporter[205632]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:04:01 compute-0 openstack_network_exporter[205632]: ERROR   20:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:04:01 compute-0 openstack_network_exporter[205632]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:04:01 compute-0 openstack_network_exporter[205632]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
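The exporter errors above come from appctl probes for daemons that do not run on a compute node (ovn-northd) or whose control sockets it cannot find. A pre-flight sketch that checks for the usual control-socket paths before scraping; the glob patterns are assumptions based on common OVS/OVN defaults, not confirmed by this log:

    import glob

    # Typical control-socket locations on an OVS/OVN host; these paths are
    # assumptions based on common defaults, not taken from this log.
    CANDIDATES = {
        "ovn-northd":   "/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/run/openvswitch/ovs-vswitchd.*.ctl",
    }

    for daemon, pattern in CANDIDATES.items():
        found = glob.glob(pattern)
        status = found[0] if found else "no control socket (daemon likely not running here)"
        print(f"{daemon:14s} {status}")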
Dec 10 20:04:01 compute-0 nova_compute[189279]: 2025-12-10 20:04:01.657 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.407 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.428 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.429 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
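The "Updating instance_info_cache with network_info" line above carries a JSON list of ports for the instance being healed. A sketch that walks that structure to pair fixed and floating IPs; the payload below is a hand-trimmed copy of the one in the log, reduced to the fields the loop touches:

    import json

    # Trimmed copy of the network_info payload logged above for instance
    # 12986b74-7b15-4ff4-9019-081950660d4b (only the fields used here).
    network_info = json.loads("""
    [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d",
      "address": "fa:16:3e:96:2e:35",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.139", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.175", "type": "floating"}]}]}]}}]
    """)

    for port in network_info:
        for subnet in port["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(port["address"], ip["address"], "->", floats or "no floating IP")
    # fa:16:3e:96:2e:35 192.168.0.139 -> ['192.168.122.175']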
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.430 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.430 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.431 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.431 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.432 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.432 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.433 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.433 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.463 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.464 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.464 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
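The acquire/release pair around "compute_resources" above is oslo.concurrency's named-lock pattern. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the lock name matches the log, but the guarded function body is a placeholder, not nova's actual cache-cleaning code:

    from oslo_concurrency import lockutils

    # Same named-lock pattern as the "compute_resources" lock above;
    # the body is a placeholder, not nova's resource-tracker logic.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass

    # Equivalent explicit form, as a context manager:
    with lockutils.lock("compute_resources"):
        pass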
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.464 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.571 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.644 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.647 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.744 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.745 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.812 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.814 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.895 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.904 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.967 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:02 compute-0 nova_compute[189279]: 2025-12-10 20:04:02.969 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.031 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.033 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.100 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.102 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.166 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.177 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.241 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.242 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.312 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.315 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.386 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.388 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.458 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
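The qemu-img invocations above are Nova's periodic disk-usage refresh for the locally stored instances; each probe is wrapped in oslo_concurrency.prlimit to cap address space (1 GiB) and CPU time (30 s). The same data can be read by hand with the command taken verbatim from the entries above:

    qemu-img info --force-share --output=json /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf/disk

--force-share lets the query run without taking the image lock while the guest is using the disk.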
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.825 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
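This warning is raised when the libvirt driver sees more than one CPU socket per NUMA node in the reported topology, in which case the `socket` PCI NUMA affinity policy is disabled; on a KVM guest like this one that is expected. The topology the driver saw can be checked with standard util-linux tooling, for example:

    lscpu | grep -E '^(Socket|NUMA node)'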
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.826 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4776MB free_disk=72.33087158203125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.826 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.826 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.906 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.906 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 1fbc523f-accf-4848-80b7-6d997e0c65bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.907 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.907 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.907 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.920 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.934 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.935 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.948 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 20:04:03 compute-0 nova_compute[189279]: 2025-12-10 20:04:03.968 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 20:04:04 compute-0 nova_compute[189279]: 2025-12-10 20:04:04.034 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:04:04 compute-0 nova_compute[189279]: 2025-12-10 20:04:04.061 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
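Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the figures above work out to 32 VCPU (8 * 4.0), 7167 MB of RAM (7679 - 512), and about 70 GB of disk ((79 - 1) * 0.9). Assuming the osc-placement plugin is installed, the same provider can be inspected directly:

    openstack resource provider inventory list fc709657-cb59-4c0b-8f54-5be8a24ab091
    openstack resource provider usage show fc709657-cb59-4c0b-8f54-5be8a24ab091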
Dec 10 20:04:04 compute-0 nova_compute[189279]: 2025-12-10 20:04:04.062 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:04:04 compute-0 nova_compute[189279]: 2025-12-10 20:04:04.062 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:04 compute-0 podman[244240]: 2025-12-10 20:04:04.118824933 +0000 UTC m=+0.082960033 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:04:04 compute-0 podman[244239]: 2025-12-10 20:04:04.124350311 +0000 UTC m=+0.104341647 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
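The podman health_status entries above come from the healthcheck command declared in each container's config_data (the /openstack/healthcheck scripts mounted from /var/lib/openstack/healthchecks). health_status=healthy with health_failing_streak=0 means the probe keeps passing; a probe can also be re-run on demand, roughly:

    podman healthcheck run multipathd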
Dec 10 20:04:04 compute-0 nova_compute[189279]: 2025-12-10 20:04:04.164 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:06 compute-0 nova_compute[189279]: 2025-12-10 20:04:06.660 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:08 compute-0 podman[244283]: 2025-12-10 20:04:08.176888076 +0000 UTC m=+0.132909707 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:04:09 compute-0 nova_compute[189279]: 2025-12-10 20:04:09.167 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:11 compute-0 nova_compute[189279]: 2025-12-10 20:04:11.662 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:14 compute-0 podman[244308]: 2025-12-10 20:04:14.149656924 +0000 UTC m=+0.106941999 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 20:04:14 compute-0 nova_compute[189279]: 2025-12-10 20:04:14.170 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:16 compute-0 nova_compute[189279]: 2025-12-10 20:04:16.665 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:19 compute-0 nova_compute[189279]: 2025-12-10 20:04:19.173 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:21 compute-0 podman[244329]: 2025-12-10 20:04:21.139443631 +0000 UTC m=+0.099720004 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 10 20:04:21 compute-0 nova_compute[189279]: 2025-12-10 20:04:21.669 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:23 compute-0 podman[244351]: 2025-12-10 20:04:23.108075357 +0000 UTC m=+0.091712529 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:04:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:23.379 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:23.380 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:23.380 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:24 compute-0 nova_compute[189279]: 2025-12-10 20:04:24.175 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:26 compute-0 nova_compute[189279]: 2025-12-10 20:04:26.672 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:29 compute-0 podman[244374]: 2025-12-10 20:04:29.105149701 +0000 UTC m=+0.082195812 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 10 20:04:29 compute-0 podman[244376]: 2025-12-10 20:04:29.119789055 +0000 UTC m=+0.089569541 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, version=9.4, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Dec 10 20:04:29 compute-0 podman[244375]: 2025-12-10 20:04:29.120747451 +0000 UTC m=+0.093704282 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 10 20:04:29 compute-0 nova_compute[189279]: 2025-12-10 20:04:29.177 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:29 compute-0 podman[203484]: time="2025-12-10T20:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:04:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:04:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
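These two GET requests are the podman_exporter scraping the libpod REST API over the socket mounted into it (/run/podman/podman.sock, per its config_data above). Assuming curl is available on the host, the same endpoint can be queried directly for debugging; the hostname after the socket is a placeholder and is ignored:

    curl --unix-socket /run/podman/podman.sock 'http://d/v4.9.3/libpod/containers/json?all=true'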
Dec 10 20:04:31 compute-0 openstack_network_exporter[205632]: ERROR   20:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:04:31 compute-0 openstack_network_exporter[205632]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:04:31 compute-0 openstack_network_exporter[205632]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:04:31 compute-0 openstack_network_exporter[205632]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:04:31 compute-0 openstack_network_exporter[205632]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
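The openstack_network_exporter errors above are expected on a compute node: ovn-northd only runs on the control plane, so no control socket for it exists here, and the dpif-netdev/pmd-* appctl calls only return data when OVS runs the userspace (netdev/DPDK) datapath with PMD threads, which the "please specify an existing datapath" replies suggest is not the case on this host. They indicate missing optional collectors rather than a fault. The equivalent manual probe would be something like:

    ovs-appctl dpif-netdev/pmd-perf-show

which fails in the same way unless a netdev datapath exists.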
Dec 10 20:04:31 compute-0 nova_compute[189279]: 2025-12-10 20:04:31.674 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:34 compute-0 nova_compute[189279]: 2025-12-10 20:04:34.178 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:35 compute-0 podman[244430]: 2025-12-10 20:04:35.112737765 +0000 UTC m=+0.081487605 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 10 20:04:35 compute-0 podman[244431]: 2025-12-10 20:04:35.134806738 +0000 UTC m=+0.095465190 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:04:36 compute-0 nova_compute[189279]: 2025-12-10 20:04:36.677 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:39 compute-0 podman[244468]: 2025-12-10 20:04:39.133382439 +0000 UTC m=+0.115248422 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 10 20:04:39 compute-0 nova_compute[189279]: 2025-12-10 20:04:39.181 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:41 compute-0 nova_compute[189279]: 2025-12-10 20:04:41.681 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:44 compute-0 nova_compute[189279]: 2025-12-10 20:04:44.184 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:44 compute-0 podman[244491]: 2025-12-10 20:04:44.761940624 +0000 UTC m=+0.086114807 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 10 20:04:46 compute-0 nova_compute[189279]: 2025-12-10 20:04:46.682 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:47 compute-0 nova_compute[189279]: 2025-12-10 20:04:47.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:47 compute-0 nova_compute[189279]: 2025-12-10 20:04:47.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 20:04:47 compute-0 nova_compute[189279]: 2025-12-10 20:04:47.512 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 20:04:49 compute-0 nova_compute[189279]: 2025-12-10 20:04:49.187 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:51 compute-0 nova_compute[189279]: 2025-12-10 20:04:51.685 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:52 compute-0 podman[244511]: 2025-12-10 20:04:52.136328193 +0000 UTC m=+0.108113190 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.254 189283 DEBUG nova.compute.manager [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-changed-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.254 189283 DEBUG nova.compute.manager [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Refreshing instance network info cache due to event network-changed-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.254 189283 DEBUG oslo_concurrency.lockutils [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.255 189283 DEBUG oslo_concurrency.lockutils [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.255 189283 DEBUG nova.network.neutron [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Refreshing network info cache for port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
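The network-changed event for port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 makes Nova refresh that instance's network info cache under the refresh_cache lock. If the port state needs to be checked independently of Nova's cache, the Neutron side can be queried directly, assuming suitable credentials:

    openstack port show b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70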
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.507 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.508 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.508 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.509 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.509 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.991 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.992 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.992 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.993 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.993 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.995 189283 INFO nova.compute.manager [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Terminating instance
Dec 10 20:04:53 compute-0 nova_compute[189279]: 2025-12-10 20:04:53.997 189283 DEBUG nova.compute.manager [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:04:54 compute-0 kernel: tapb4b01034-4b (unregistering): left promiscuous mode
Dec 10 20:04:54 compute-0 NetworkManager[56238]: <info>  [1765397094.0444] device (tapb4b01034-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00061|binding|INFO|Releasing lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 from this chassis (sb_readonly=0)
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00062|binding|INFO|Setting lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 down in Southbound
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00063|binding|INFO|Removing iface tapb4b01034-4b ovn-installed in OVS
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.066 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.068 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:db:85 192.168.0.7'], port_security=['fa:16:3e:85:db:85 192.168.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:cidrs': '192.168.0.7/24', 'neutron:device_id': '1fbc523f-accf-4848-80b7-6d997e0c65bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.070 106564 INFO neutron.agent.ovn.metadata.agent [-] Port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 unbound from our chassis
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.072 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.080 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.097 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0d90c883-02b4-4ea9-b8bc-36d109a80fda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 10 20:04:54 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 1min 9.600s CPU time.
Dec 10 20:04:54 compute-0 systemd-machined[155642]: Machine qemu-3-instance-00000004 terminated.
Dec 10 20:04:54 compute-0 podman[244533]: 2025-12-10 20:04:54.129100499 +0000 UTC m=+0.096206819 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.141 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[69e58951-c0fa-4079-be7f-ade68ec9747b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.145 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[d81fc1a2-6556-4c7e-8dfe-c28554cddd51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.173 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[f5906402-647e-4cc5-9378-56480fd14b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.182 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.183 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.189 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.193 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[077bc274-fd54-480f-adc6-bb28fcd2dcaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244569, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.212 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c66ace4b-8a1a-4106-84da-060304dc7953]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244570, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244570, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.214 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.215 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.223 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 kernel: tapb4b01034-4b: entered promiscuous mode
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.224 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.225 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.225 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.229 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.230 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:04:54 compute-0 kernel: tapb4b01034-4b (unregistering): left promiscuous mode
Dec 10 20:04:54 compute-0 NetworkManager[56238]: <info>  [1765397094.2361] manager: (tapb4b01034-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00064|binding|INFO|Claiming lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for this chassis.
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00065|binding|INFO|b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70: Claiming fa:16:3e:85:db:85 192.168.0.7
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.238 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.247 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:db:85 192.168.0.7'], port_security=['fa:16:3e:85:db:85 192.168.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:cidrs': '192.168.0.7/24', 'neutron:device_id': '1fbc523f-accf-4848-80b7-6d997e0c65bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.248 106564 INFO neutron.agent.ovn.metadata.agent [-] Port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 bound to our chassis
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.249 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00066|binding|INFO|Setting lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 ovn-installed in OVS
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.257 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00067|binding|INFO|Setting lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 up in Southbound
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00068|binding|INFO|Releasing lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 from this chassis (sb_readonly=1)
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00069|if_status|INFO|Not setting lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 down as sb is readonly
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00070|binding|INFO|Removing iface tapb4b01034-4b ovn-installed in OVS
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.261 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00071|binding|INFO|Releasing lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 from this chassis (sb_readonly=0)
Dec 10 20:04:54 compute-0 ovn_controller[97701]: 2025-12-10T20:04:54Z|00072|binding|INFO|Setting lport b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 down in Southbound
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.271 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.272 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:db:85 192.168.0.7'], port_security=['fa:16:3e:85:db:85 192.168.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:cidrs': '192.168.0.7/24', 'neutron:device_id': '1fbc523f-accf-4848-80b7-6d997e0c65bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-dwjurop636bf-do4uo7veagb7-port-q3eae3svln7f', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.271 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[946eb6b1-c2ec-445b-905e-964c86371b2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.308 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[8b044d7c-c764-4778-8d1b-0d98c289bcc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.312 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[f94c4459-88b3-437a-b626-784228ffa132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.322 189283 INFO nova.virt.libvirt.driver [-] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Instance destroyed successfully.
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.323 189283 DEBUG nova.objects.instance [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'resources' on Instance uuid 1fbc523f-accf-4848-80b7-6d997e0c65bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.340 189283 DEBUG nova.virt.libvirt.vif [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T19:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-dwjurop636bf-do4uo7veagb7-vnf-yn34ze3ueztp',id=4,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T19:59:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-eobt2g4q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T19:59:11Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODE2MTk1MDUxMjk2OTY2NDAyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 10 20:04:54 compute-0 nova_compute[189279]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODE2MTk1MDUxMjk2OTY2NDAyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgxNjE5NTA1MTI5Njk2NjQwMjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MTYxOTUwNTEyOTY5NjY0MDIwPT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=1fbc523f-accf-4848-80b7-6d997e0c65bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.340 189283 DEBUG nova.network.os_vif_util [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.341 189283 DEBUG nova.network.os_vif_util [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:db:85,bridge_name='br-int',has_traffic_filtering=True,id=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4b01034-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.341 189283 DEBUG os_vif [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:db:85,bridge_name='br-int',has_traffic_filtering=True,id=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4b01034-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.343 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.343 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b01034-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.345 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.347 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.346 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[d122bc14-7008-4aeb-8182-b2fc8787aac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.349 189283 INFO os_vif [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:db:85,bridge_name='br-int',has_traffic_filtering=True,id=b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4b01034-4b')
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.350 189283 INFO nova.virt.libvirt.driver [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Deleting instance files /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf_del
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.351 189283 INFO nova.virt.libvirt.driver [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Deletion of /var/lib/nova/instances/1fbc523f-accf-4848-80b7-6d997e0c65bf_del complete
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.372 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5717f9a6-d180-40ac-b8d9-a906b1a0b4ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 19, 'rx_bytes': 658, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 19, 'rx_bytes': 658, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244595, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.395 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff8b12d-a3c9-4df7-bb37-8a187de8fe95]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244596, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244596, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.396 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.398 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.399 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.400 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.400 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.400 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.401 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.401 106564 INFO neutron.agent.ovn.metadata.agent [-] Port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 unbound from our chassis
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.402 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.408 189283 DEBUG nova.compute.manager [req-aa802739-8337-4fe6-8718-71264fb1ca39 req-7b8c2b36-6d7a-47c1-bcf3-f5a780853892 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-unplugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.408 189283 DEBUG oslo_concurrency.lockutils [req-aa802739-8337-4fe6-8718-71264fb1ca39 req-7b8c2b36-6d7a-47c1-bcf3-f5a780853892 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.409 189283 DEBUG oslo_concurrency.lockutils [req-aa802739-8337-4fe6-8718-71264fb1ca39 req-7b8c2b36-6d7a-47c1-bcf3-f5a780853892 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.409 189283 DEBUG oslo_concurrency.lockutils [req-aa802739-8337-4fe6-8718-71264fb1ca39 req-7b8c2b36-6d7a-47c1-bcf3-f5a780853892 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.410 189283 DEBUG nova.compute.manager [req-aa802739-8337-4fe6-8718-71264fb1ca39 req-7b8c2b36-6d7a-47c1-bcf3-f5a780853892 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] No waiting events found dispatching network-vif-unplugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.410 189283 DEBUG nova.compute.manager [req-aa802739-8337-4fe6-8718-71264fb1ca39 req-7b8c2b36-6d7a-47c1-bcf3-f5a780853892 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-unplugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.419 189283 INFO nova.compute.manager [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Took 0.42 seconds to destroy the instance on the hypervisor.
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.420 189283 DEBUG oslo.service.loopingcall [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.420 189283 DEBUG nova.compute.manager [-] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.421 189283 DEBUG nova.network.neutron [-] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.422 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[f8507e13-5349-4446-a0a8-bf76c1fd3464]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.454 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0ed83e-2db0-4421-9ab1-a40800663df2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.460 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc0acd7-6c86-4d96-bdf0-10e5ea4c736e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.482 189283 DEBUG nova.network.neutron [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updated VIF entry in instance network info cache for port b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.482 189283 DEBUG nova.network.neutron [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updating instance_info_cache with network_info: [{"id": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "address": "fa:16:3e:85:db:85", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4b01034-4b", "ovs_interfaceid": "b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.501 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[e75c6259-27bc-4db7-8bfb-53737dbe2637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 20:04:54.340 189283 DEBUG nova.virt.libvirt.vif [None req-47923f58-34 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.516 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.517 189283 DEBUG oslo_concurrency.lockutils [req-0eb9ff16-de3d-48cd-a4bc-e131b1dd73c3 req-ce4ecdd9-7168-4184-bfe5-46eebc518478 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-1fbc523f-accf-4848-80b7-6d997e0c65bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.522 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[bf680da2-58de-4a5a-a9e7-dfd68a13ca09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 21, 'rx_bytes': 658, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 21, 'rx_bytes': 658, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244602, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.539 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[85f2462c-e053-47e9-a111-744afeda5364]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244603, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244603, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.541 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.542 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.544 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.544 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.545 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.545 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:04:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:04:54.545 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.698 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.698 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:04:54 compute-0 nova_compute[189279]: 2025-12-10 20:04:54.699 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:04:55 compute-0 nova_compute[189279]: 2025-12-10 20:04:55.632 189283 DEBUG nova.network.neutron [-] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:04:55 compute-0 nova_compute[189279]: 2025-12-10 20:04:55.650 189283 INFO nova.compute.manager [-] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Took 1.23 seconds to deallocate network for instance.
Dec 10 20:04:55 compute-0 nova_compute[189279]: 2025-12-10 20:04:55.688 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:55 compute-0 nova_compute[189279]: 2025-12-10 20:04:55.689 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.002 189283 DEBUG nova.compute.provider_tree [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.030 189283 DEBUG nova.scheduler.client.report [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.063 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.146 189283 INFO nova.scheduler.client.report [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Deleted allocations for instance 1fbc523f-accf-4848-80b7-6d997e0c65bf
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.211 189283 DEBUG oslo_concurrency.lockutils [None req-47923f58-344b-4918-9de0-685271103847 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.228 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updating instance_info_cache with network_info: [{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.248 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.249 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.249 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.249 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.249 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.271 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.271 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.272 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.272 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.361 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.420 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.421 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.481 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.482 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.507 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.508 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.508 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.508 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.509 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] No waiting events found dispatching network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.509 189283 WARNING nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received unexpected event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for instance with vm_state deleted and task_state None.
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.509 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.509 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.509 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.510 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.510 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] No waiting events found dispatching network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.510 189283 WARNING nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received unexpected event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for instance with vm_state deleted and task_state None.
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.510 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.510 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.510 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.511 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.511 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] No waiting events found dispatching network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.511 189283 WARNING nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received unexpected event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for instance with vm_state deleted and task_state None.
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.511 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-unplugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.511 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.511 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.512 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.512 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] No waiting events found dispatching network-vif-unplugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.512 189283 WARNING nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received unexpected event network-vif-unplugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for instance with vm_state deleted and task_state None.
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.512 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.512 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.512 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.513 189283 DEBUG oslo_concurrency.lockutils [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1fbc523f-accf-4848-80b7-6d997e0c65bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.513 189283 DEBUG nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] No waiting events found dispatching network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.513 189283 WARNING nova.compute.manager [req-9f1a6976-2596-456b-aaee-3f75378cf6db req-3b6c3100-def3-4c68-8290-3ac2cc1103f0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Received unexpected event network-vif-plugged-b4b01034-4bf7-4f7a-943a-2a4ccbf3ca70 for instance with vm_state deleted and task_state None.
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.547 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.547 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.640 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.650 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.688 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.711 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.712 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.775 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.776 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.841 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.842 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:04:56 compute-0 nova_compute[189279]: 2025-12-10 20:04:56.906 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.267 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.269 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4938MB free_disk=72.3531608581543GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.269 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.270 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.355 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.356 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.357 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.358 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.421 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.437 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.461 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.462 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.463 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.714 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:57 compute-0 nova_compute[189279]: 2025-12-10 20:04:57.752 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:04:59 compute-0 nova_compute[189279]: 2025-12-10 20:04:59.346 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:04:59 compute-0 podman[203484]: time="2025-12-10T20:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:04:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:04:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec 10 20:05:00 compute-0 podman[244629]: 2025-12-10 20:05:00.087250999 +0000 UTC m=+0.068112554 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:05:00 compute-0 podman[244631]: 2025-12-10 20:05:00.095192282 +0000 UTC m=+0.072628475 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, vcs-type=git, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 10 20:05:00 compute-0 podman[244630]: 2025-12-10 20:05:00.123432872 +0000 UTC m=+0.103361712 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 20:05:01 compute-0 anacron[7483]: Job `cron.monthly' started
Dec 10 20:05:01 compute-0 anacron[7483]: Job `cron.monthly' terminated
Dec 10 20:05:01 compute-0 anacron[7483]: Normal exit (3 jobs run)
Dec 10 20:05:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:05:01.233 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:05:01 compute-0 openstack_network_exporter[205632]: ERROR   20:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:05:01 compute-0 openstack_network_exporter[205632]: ERROR   20:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:05:01 compute-0 openstack_network_exporter[205632]: ERROR   20:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:05:01 compute-0 openstack_network_exporter[205632]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:05:01 compute-0 openstack_network_exporter[205632]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:05:01 compute-0 nova_compute[189279]: 2025-12-10 20:05:01.690 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:04 compute-0 nova_compute[189279]: 2025-12-10 20:05:04.350 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:04 compute-0 nova_compute[189279]: 2025-12-10 20:05:04.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:04 compute-0 nova_compute[189279]: 2025-12-10 20:05:04.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 20:05:06 compute-0 podman[244690]: 2025-12-10 20:05:06.090456467 +0000 UTC m=+0.064325881 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:05:06 compute-0 podman[244689]: 2025-12-10 20:05:06.118150593 +0000 UTC m=+0.099698903 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 10 20:05:06 compute-0 nova_compute[189279]: 2025-12-10 20:05:06.692 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:07 compute-0 sshd-session[244731]: Connection closed by 80.94.92.184 port 54746
Dec 10 20:05:09 compute-0 nova_compute[189279]: 2025-12-10 20:05:09.321 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397094.3201077, 1fbc523f-accf-4848-80b7-6d997e0c65bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:05:09 compute-0 nova_compute[189279]: 2025-12-10 20:05:09.321 189283 INFO nova.compute.manager [-] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] VM Stopped (Lifecycle Event)
Dec 10 20:05:09 compute-0 nova_compute[189279]: 2025-12-10 20:05:09.355 189283 DEBUG nova.compute.manager [None req-984d2b72-5224-4729-b48a-ec0806489f69 - - - - - -] [instance: 1fbc523f-accf-4848-80b7-6d997e0c65bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:05:09 compute-0 nova_compute[189279]: 2025-12-10 20:05:09.356 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:10 compute-0 podman[244732]: 2025-12-10 20:05:10.137632604 +0000 UTC m=+0.111419659 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:05:11 compute-0 nova_compute[189279]: 2025-12-10 20:05:11.697 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:14 compute-0 nova_compute[189279]: 2025-12-10 20:05:14.359 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:14 compute-0 sshd-session[244757]: Accepted publickey for zuul from 38.102.83.132 port 41222 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 20:05:14 compute-0 systemd-logind[789]: New session 30 of user zuul.
Dec 10 20:05:14 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec 10 20:05:14 compute-0 sshd-session[244757]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 20:05:15 compute-0 podman[244908]: 2025-12-10 20:05:15.0959952 +0000 UTC m=+0.071310730 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, tcib_managed=true)
Dec 10 20:05:15 compute-0 sudo[244951]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfvysmgqzcmtabqkorzsyftzlnaybpv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765397114.542374-58575-72733974778769/AnsiballZ_command.py'
Dec 10 20:05:15 compute-0 sudo[244951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 20:05:15 compute-0 python3[244956]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 20:05:15 compute-0 sudo[244951]: pam_unix(sudo:session): session closed for user root
Dec 10 20:05:16 compute-0 nova_compute[189279]: 2025-12-10 20:05:16.700 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:19 compute-0 nova_compute[189279]: 2025-12-10 20:05:19.363 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:21 compute-0 nova_compute[189279]: 2025-12-10 20:05:21.703 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:23 compute-0 podman[244997]: 2025-12-10 20:05:23.115699719 +0000 UTC m=+0.097390971 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Dec 10 20:05:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:05:23.382 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:05:23.382 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:05:23.384 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:24 compute-0 nova_compute[189279]: 2025-12-10 20:05:24.367 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:25 compute-0 podman[245020]: 2025-12-10 20:05:25.125674698 +0000 UTC m=+0.095826089 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:05:26 compute-0 nova_compute[189279]: 2025-12-10 20:05:26.705 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:28 compute-0 ovn_controller[97701]: 2025-12-10T20:05:28Z|00073|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.314 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.344 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Triggering sync for uuid 12986b74-7b15-4ff4-9019-081950660d4b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.344 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Triggering sync for uuid 26729739-a300-43fe-8678-5294ed41f6ed _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.345 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.345 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "12986b74-7b15-4ff4-9019-081950660d4b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.345 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.346 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "26729739-a300-43fe-8678-5294ed41f6ed" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.370 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.394 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "12986b74-7b15-4ff4-9019-081950660d4b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:29 compute-0 nova_compute[189279]: 2025-12-10 20:05:29.396 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "26729739-a300-43fe-8678-5294ed41f6ed" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:29 compute-0 podman[203484]: time="2025-12-10T20:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:05:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:05:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.061 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.061 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.078 189283 DEBUG nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:05:31 compute-0 podman[245044]: 2025-12-10 20:05:31.088669953 +0000 UTC m=+0.070642481 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:05:31 compute-0 podman[245045]: 2025-12-10 20:05:31.089112215 +0000 UTC m=+0.067233389 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:05:31 compute-0 podman[245046]: 2025-12-10 20:05:31.107343486 +0000 UTC m=+0.075780150 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, managed_by=edpm_ansible, name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.156 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.156 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.174 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.174 189283 INFO nova.compute.claims [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.317 189283 DEBUG nova.compute.provider_tree [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.332 189283 DEBUG nova.scheduler.client.report [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.350 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.351 189283 DEBUG nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.399 189283 DEBUG nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.413 189283 INFO nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:05:31 compute-0 openstack_network_exporter[205632]: ERROR   20:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:05:31 compute-0 openstack_network_exporter[205632]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:05:31 compute-0 openstack_network_exporter[205632]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:05:31 compute-0 openstack_network_exporter[205632]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:05:31 compute-0 openstack_network_exporter[205632]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.446 189283 DEBUG nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.526 189283 DEBUG nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.527 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.528 189283 INFO nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Creating image(s)
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.529 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.529 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.530 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.531 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "490d50a9caa1916c71e31166385320ae93d214b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.532 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "490d50a9caa1916c71e31166385320ae93d214b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:31 compute-0 nova_compute[189279]: 2025-12-10 20:05:31.709 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:32 compute-0 nova_compute[189279]: 2025-12-10 20:05:32.763 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:32 compute-0 nova_compute[189279]: 2025-12-10 20:05:32.830 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.part --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:32 compute-0 nova_compute[189279]: 2025-12-10 20:05:32.831 189283 DEBUG nova.virt.images [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] 7f822ef0-9d45-454e-ab2e-e4b757992d9f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 10 20:05:32 compute-0 nova_compute[189279]: 2025-12-10 20:05:32.833 189283 DEBUG nova.privsep.utils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 10 20:05:32 compute-0 nova_compute[189279]: 2025-12-10 20:05:32.833 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.part /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.024 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.part /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.converted" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.036 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.102 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6.converted --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.105 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "490d50a9caa1916c71e31166385320ae93d214b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.126 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.192 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.193 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "490d50a9caa1916c71e31166385320ae93d214b6" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.194 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "490d50a9caa1916c71e31166385320ae93d214b6" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.206 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.269 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.271 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6,backing_fmt=raw /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.324 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6,backing_fmt=raw /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.325 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "490d50a9caa1916c71e31166385320ae93d214b6" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.326 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.415 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.417 189283 DEBUG nova.virt.disk.api [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Checking if we can resize image /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.418 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.494 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.496 189283 DEBUG nova.virt.disk.api [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Cannot resize image /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.497 189283 DEBUG nova.objects.instance [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'migration_context' on Instance uuid b46f5142-0287-480c-a9d8-fb9b8c0d3587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.515 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.516 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.517 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.547 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.650 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.653 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.654 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.680 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.772 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.773 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.830 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.eph0 1073741824" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
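The pair of commands above shows the qcow2 image backend at work: the instance's ephemeral disk is created as a copy-on-write overlay whose backing file is the cached raw base image under _base, sized to the flavor's 1 GiB ephemeral value. A minimal sketch of an equivalent call via subprocess, assuming qemu-img is installed; the overlay path uses a placeholder and is illustrative only.

# Minimal sketch of creating a qcow2 overlay the way the log shows.
import os
import subprocess

base = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'
overlay = '/var/lib/nova/instances/<instance-uuid>/disk.eph0'  # illustrative path

subprocess.run(
    ['qemu-img', 'create', '-f', 'qcow2',
     '-o', f'backing_file={base},backing_fmt=raw',
     overlay, '1073741824'],                       # 1 GiB, matching ephemeral_gb=1
    check=True,
    env={**os.environ, 'LC_ALL': 'C', 'LANG': 'C'},
)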
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.832 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.833 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.896 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.898 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.898 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Ensure instance console log exists: /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.898 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.899 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.899 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.901 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T20:05:19Z,direct_url=<?>,disk_format='qcow2',id=7f822ef0-9d45-454e-ab2e-e4b757992d9f,min_disk=0,min_ram=0,name='fvt_testing_image',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T20:05:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '7f822ef0-9d45-454e-ab2e-e4b757992d9f'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 1, 'encryption_options': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.907 189283 WARNING nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.915 189283 DEBUG nova.virt.libvirt.host [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.916 189283 DEBUG nova.virt.libvirt.host [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.920 189283 DEBUG nova.virt.libvirt.host [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.920 189283 DEBUG nova.virt.libvirt.host [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.921 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.921 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:05:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='fa657b91-0ede-4606-b72a-342d514829df',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-10T20:05:19Z,direct_url=<?>,disk_format='qcow2',id=7f822ef0-9d45-454e-ab2e-e4b757992d9f,min_disk=0,min_ram=0,name='fvt_testing_image',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-10T20:05:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.921 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.922 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.922 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.922 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.922 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.922 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.922 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.923 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.923 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.923 189283 DEBUG nova.virt.hardware [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
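The topology lines above come down to simple arithmetic: with no flavor or image constraints, a 1-vCPU guest can only be expressed as sockets * cores * threads = 1, so 1:1:1 is the single candidate and is what ends up in the domain XML below. A minimal sketch of such an enumeration follows; it is illustrative and not the actual nova.virt.hardware algorithm.

# Minimal sketch: enumerate sockets/cores/threads combinations for a vCPU
# count, reproducing the 1:1:1 result in the log (illustrative only).
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus, max_cores) + 1):
            for threads in range(1, min(vcpus, max_threads) + 1):
                if sockets * cores * threads == vcpus:
                    yield (sockets, cores, threads)

print(list(possible_topologies(1)))   # [(1, 1, 1)]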
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.927 189283 DEBUG nova.objects.instance [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'pci_devices' on Instance uuid b46f5142-0287-480c-a9d8-fb9b8c0d3587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:05:33 compute-0 nova_compute[189279]: 2025-12-10 20:05:33.947 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <uuid>b46f5142-0287-480c-a9d8-fb9b8c0d3587</uuid>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <name>instance-00000006</name>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <memory>524288</memory>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <nova:name>fvt_testing_server</nova:name>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:05:33</nova:creationTime>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <nova:flavor name="fvt_testing_flavor">
Dec 10 20:05:33 compute-0 nova_compute[189279]:         <nova:memory>512</nova:memory>
Dec 10 20:05:33 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:05:33 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:05:33 compute-0 nova_compute[189279]:         <nova:ephemeral>1</nova:ephemeral>
Dec 10 20:05:33 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:05:33 compute-0 nova_compute[189279]:         <nova:user uuid="2143e69e49fd49db99c8737c973c1ea5">admin</nova:user>
Dec 10 20:05:33 compute-0 nova_compute[189279]:         <nova:project uuid="fe518ea62a94467e823b2b1046c57a2e">admin</nova:project>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="7f822ef0-9d45-454e-ab2e-e4b757992d9f"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <nova:ports/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <system>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <entry name="serial">b46f5142-0287-480c-a9d8-fb9b8c0d3587</entry>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <entry name="uuid">b46f5142-0287-480c-a9d8-fb9b8c0d3587</entry>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </system>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <os>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   </os>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <features>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   </features>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.eph0"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <target dev="vdb" bus="virtio"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.config"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/console.log" append="off"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <video>
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </video>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:05:33 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:05:33 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:05:33 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:05:33 compute-0 nova_compute[189279]: </domain>
Dec 10 20:05:33 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
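The XML block above is the complete libvirt domain definition Nova hands to the hypervisor: a q35 machine with a host-model CPU, two qcow2 virtio disks (root and ephemeral), a SATA CD-ROM reserved for the config drive, and no network interfaces, matching network_info=[]. A minimal sketch of pulling the disk layout back out of such XML with the standard library, assuming the XML has been saved to a file named domain.xml.

# Minimal sketch: list the disks from a libvirt domain XML like the one above.
import xml.etree.ElementTree as ET

root = ET.parse('domain.xml').getroot()
for disk in root.findall('./devices/disk'):
    source = disk.find('source')
    target = disk.find('target')
    print(disk.get('device'),                                  # disk / cdrom
          target.get('dev'), target.get('bus'),                # vda virtio, sda sata, ...
          source.get('file') if source is not None else '-')   # backing path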
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.010 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.011 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.011 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.011 189283 INFO nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Using config drive
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.374 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.420 189283 INFO nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Creating config drive at /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.config
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.425 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyprk7x8l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:34 compute-0 nova_compute[189279]: 2025-12-10 20:05:34.549 189283 DEBUG oslo_concurrency.processutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyprk7x8l" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
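The mkisofs call above packs the metadata files staged in the temporary directory into an ISO 9660 image labelled config-2, which is the disk.config CD-ROM declared in the domain XML; the guest locates it by that volume label. A minimal sketch of an equivalent invocation, assuming mkisofs is installed; the output path and staging directory below are illustrative.

# Minimal sketch of building a config-drive ISO the way the log shows.
import os
import subprocess

subprocess.run(
    ['/usr/bin/mkisofs',
     '-o', '/var/lib/nova/instances/<instance-uuid>/disk.config',  # illustrative path
     '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
     '-publisher', 'OpenStack Compute',
     '-quiet', '-J', '-r',
     '-V', 'config-2',               # the volume label the guest searches for
     '/tmp/metadata-staging'],       # illustrative staging directory
    check=True,
    env={**os.environ, 'LC_ALL': 'C', 'LANG': 'C'},
)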
Dec 10 20:05:34 compute-0 systemd-machined[155642]: New machine qemu-6-instance-00000006.
Dec 10 20:05:34 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.222 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397135.2213542, b46f5142-0287-480c-a9d8-fb9b8c0d3587 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.224 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] VM Resumed (Lifecycle Event)
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.226 189283 DEBUG nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.226 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.231 189283 INFO nova.virt.libvirt.driver [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Instance spawned successfully.
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.231 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.254 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.264 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.264 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.266 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.267 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.268 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.268 189283 DEBUG nova.virt.libvirt.driver [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.275 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.306 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.307 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397135.2226498, b46f5142-0287-480c-a9d8-fb9b8c0d3587 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.307 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] VM Started (Lifecycle Event)
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.325 189283 INFO nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Took 3.80 seconds to spawn the instance on the hypervisor.
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.325 189283 DEBUG nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.326 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.336 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.379 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.404 189283 INFO nova.compute.manager [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Took 4.28 seconds to build instance.
Dec 10 20:05:35 compute-0 nova_compute[189279]: 2025-12-10 20:05:35.425 189283 DEBUG oslo_concurrency.lockutils [None req-5210afd7-52f9-4463-8d52-6761711fdc0f 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:36 compute-0 nova_compute[189279]: 2025-12-10 20:05:36.712 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:37 compute-0 podman[245171]: 2025-12-10 20:05:37.118771814 +0000 UTC m=+0.080268530 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:05:37 compute-0 podman[245170]: 2025-12-10 20:05:37.141132696 +0000 UTC m=+0.117584915 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 20:05:37 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 10 20:05:37 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 10 20:05:39 compute-0 nova_compute[189279]: 2025-12-10 20:05:39.379 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:41 compute-0 podman[245230]: 2025-12-10 20:05:41.119335639 +0000 UTC m=+0.098903051 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 10 20:05:41 compute-0 nova_compute[189279]: 2025-12-10 20:05:41.714 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.175 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.177 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.178 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.179 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.184 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.187 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '26729739-a300-43fe-8678-5294ed41f6ed', 'name': 'vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.189 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b46f5142-0287-480c-a9d8-fb9b8c0d3587 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:05:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:42.191 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b46f5142-0287-480c-a9d8-fb9b8c0d3587 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.447 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Wed, 10 Dec 2025 20:05:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ba6723d7-31f5-4062-a6cb-154e20deb135 x-openstack-request-id: req-ba6723d7-31f5-4062-a6cb-154e20deb135 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.447 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b46f5142-0287-480c-a9d8-fb9b8c0d3587", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "user_id": "2143e69e49fd49db99c8737c973c1ea5", "metadata": {}, "hostId": "dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852", "image": {"id": "7f822ef0-9d45-454e-ab2e-e4b757992d9f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7f822ef0-9d45-454e-ab2e-e4b757992d9f"}]}, "flavor": {"id": "fa657b91-0ede-4606-b72a-342d514829df", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/fa657b91-0ede-4606-b72a-342d514829df"}]}, "created": "2025-12-10T20:05:30Z", "updated": "2025-12-10T20:05:35Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b46f5142-0287-480c-a9d8-fb9b8c0d3587"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b46f5142-0287-480c-a9d8-fb9b8c0d3587"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-10T20:05:35.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.448 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b46f5142-0287-480c-a9d8-fb9b8c0d3587 used request id req-ba6723d7-31f5-4062-a6cb-154e20deb135 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.449 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b46f5142-0287-480c-a9d8-fb9b8c0d3587', 'name': 'fvt_testing_server', 'flavor': {'id': 'fa657b91-0ede-4606-b72a-342d514829df', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7f822ef0-9d45-454e-ab2e-e4b757992d9f'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
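[editor's note] The REQ/RESP pair above shows the agent resolving missing instance metadata with a plain GET against the Nova compute API (microversion 2.1). A minimal sketch of reproducing that lookup by hand is below; it is not how the agent itself works (the agent goes through python-novaclient and a keystoneauth1 session), and the OS_TOKEN environment variable is a placeholder, since the log only records a SHA256 hash of the real token.

```python
# Minimal sketch: replay the metadata GET seen in the REQ line above.
# Assumption: a valid scoped Keystone token is exported as OS_TOKEN.
import os
import requests

NOVA_URL = "https://nova-internal.openstack.svc:8774/v2.1"   # endpoint from the REQ line
SERVER_ID = "b46f5142-0287-480c-a9d8-fb9b8c0d3587"           # fvt_testing_server

resp = requests.get(
    f"{NOVA_URL}/servers/{SERVER_ID}",
    headers={
        "Accept": "application/json",
        "X-Auth-Token": os.environ["OS_TOKEN"],              # placeholder token
        "X-OpenStack-Nova-API-Version": "2.1",               # same microversion as the agent
    },
    timeout=30,
)
resp.raise_for_status()
server = resp.json()["server"]
print(server["name"], server["status"], server["OS-EXT-SRV-ATTR:instance_name"])
```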
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.449 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.449 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.450 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.450 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.451 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:05:43.449824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:05:43.451244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.481 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.481 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.481 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.504 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.505 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.505 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.530 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.531 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.531 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
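[editor's note] The disk.device.capacity samples above line up with the flavors reported during discovery: each instance shows two 1073741824-byte devices, matching the 1 GB root disk and 1 GB ephemeral disk of the m1.small / fvt_testing_flavor definitions, plus one much smaller device (485376 or 583680 bytes) that is most likely the config drive (fvt_testing_server reports config_drive: True). A quick arithmetic check, assuming the samples are reported in bytes:

```python
# Sanity check of the capacity samples above (assumption: values are bytes).
GiB = 1024 ** 3
flavor = {"disk": 1, "ephemeral": 1}            # m1.small, from the instance data above

assert 1073741824 == flavor["disk"] * GiB       # root disk sample
assert 1073741824 == flavor["ephemeral"] * GiB  # ephemeral disk sample
print(485376 / 1024, "KiB")                     # remaining small device, e.g. the config drive
```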
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.533 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.533 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.535 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:05:43.533763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:05:43.535675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.541 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.545 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.549 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.550 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.550 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:05:43.550048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.551 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.552 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.552 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.552 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.553 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.554 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.554 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.555 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.556 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.556 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.556 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:05:43.552312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.558 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:05:43.554166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:05:43.556223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:05:43.558632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.590 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: 48.76953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.618 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.639 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.639 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance b46f5142-0287-480c-a9d8-fb9b8c0d3587: ceilometer.compute.pollsters.NoVolumeException
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.640 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.640 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.641 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T20:05:43.640348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.641 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.642 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 2640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.642 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes volume: 1822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:05:43.641861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.644 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.644 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.644 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.644 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.645 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.645 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.645 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:05:43.643456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.646 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.647 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.647 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.647 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.648 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.648 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.648 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.649 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.650 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.650 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:05:43.647051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:05:43.648451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:05:43.649947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:05:43.651332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.710 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.710 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.711 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.788 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.788 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.789 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.856 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.856 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.856 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.857 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.857 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.858 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 42450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.858 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/cpu volume: 35200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.858 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/cpu volume: 8090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.859 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.859 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.859 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 425951231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.859 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 63853652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.860 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 49706577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.860 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 395037622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.860 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 62323348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:05:43.857902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.860 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 49949275 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.860 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.latency volume: 284963908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.861 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.861 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.latency volume: 1127681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:05:43.859664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.862 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.863 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.863 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.863 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.863 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.864 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.865 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.865 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.865 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:05:43.862255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:05:43.865203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.866 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.866 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.866 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.866 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.866 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.867 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.868 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.868 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.868 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.868 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.868 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.869 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.869 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.869 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.869 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:05:43.867825) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 816753194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.871 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 10242364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.872 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.872 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 1570902949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.872 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 11471208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.872 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.872 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.873 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.873 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:05:43.870430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.875 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.875 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.874 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:05:43.871639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.875 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.875 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.875 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.876 14 DEBUG ceilometer.compute.pollsters [-] b46f5142-0287-480c-a9d8-fb9b8c0d3587/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.876 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.877 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.877 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.877 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.877 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes.delta volume: 252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.876 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:05:43.874396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:05:43.877281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.878 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T20:05:43.878759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:05:43.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:05:44 compute-0 nova_compute[189279]: 2025-12-10 20:05:44.389 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:46 compute-0 podman[245257]: 2025-12-10 20:05:46.095494983 +0000 UTC m=+0.065644697 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251210, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 10 20:05:46 compute-0 nova_compute[189279]: 2025-12-10 20:05:46.724 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.552 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.553 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.554 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.554 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.555 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.556 189283 INFO nova.compute.manager [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Terminating instance
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.558 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "refresh_cache-b46f5142-0287-480c-a9d8-fb9b8c0d3587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.558 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquired lock "refresh_cache-b46f5142-0287-480c-a9d8-fb9b8c0d3587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:05:48 compute-0 nova_compute[189279]: 2025-12-10 20:05:48.559 189283 DEBUG nova.network.neutron [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:05:49 compute-0 nova_compute[189279]: 2025-12-10 20:05:49.392 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:49 compute-0 nova_compute[189279]: 2025-12-10 20:05:49.428 189283 DEBUG nova.network.neutron [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.508 189283 DEBUG nova.network.neutron [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.528 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Releasing lock "refresh_cache-b46f5142-0287-480c-a9d8-fb9b8c0d3587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.529 189283 DEBUG nova.compute.manager [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:05:50 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 10 20:05:50 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 16.264s CPU time.
Dec 10 20:05:50 compute-0 systemd-machined[155642]: Machine qemu-6-instance-00000006 terminated.
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.826 189283 INFO nova.virt.libvirt.driver [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Instance destroyed successfully.
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.827 189283 DEBUG nova.objects.instance [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'resources' on Instance uuid b46f5142-0287-480c-a9d8-fb9b8c0d3587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.839 189283 INFO nova.virt.libvirt.driver [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Deleting instance files /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587_del
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.840 189283 INFO nova.virt.libvirt.driver [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Deletion of /var/lib/nova/instances/b46f5142-0287-480c-a9d8-fb9b8c0d3587_del complete
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.888 189283 INFO nova.compute.manager [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Took 0.36 seconds to destroy the instance on the hypervisor.
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.889 189283 DEBUG oslo.service.loopingcall [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.889 189283 DEBUG nova.compute.manager [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:05:50 compute-0 nova_compute[189279]: 2025-12-10 20:05:50.889 189283 DEBUG nova.network.neutron [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.422 189283 DEBUG nova.network.neutron [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.440 189283 DEBUG nova.network.neutron [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.457 189283 INFO nova.compute.manager [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Took 0.57 seconds to deallocate network for instance.
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.501 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.501 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.614 189283 DEBUG nova.compute.provider_tree [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.632 189283 DEBUG nova.scheduler.client.report [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.652 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.678 189283 INFO nova.scheduler.client.report [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Deleted allocations for instance b46f5142-0287-480c-a9d8-fb9b8c0d3587
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.718 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:51 compute-0 nova_compute[189279]: 2025-12-10 20:05:51.745 189283 DEBUG oslo_concurrency.lockutils [None req-c058d5f9-c80e-4ef7-b772-32d50d0a455a 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "b46f5142-0287-480c-a9d8-fb9b8c0d3587" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:53 compute-0 nova_compute[189279]: 2025-12-10 20:05:53.513 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:53 compute-0 nova_compute[189279]: 2025-12-10 20:05:53.514 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:53 compute-0 nova_compute[189279]: 2025-12-10 20:05:53.514 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:54 compute-0 podman[245292]: 2025-12-10 20:05:54.103414026 +0000 UTC m=+0.087335512 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.396 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.573 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.573 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.574 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.574 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.825 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.889 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.890 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.973 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:54 compute-0 nova_compute[189279]: 2025-12-10 20:05:54.975 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.039 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.040 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.107 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.115 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.175 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.176 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.237 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.239 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.298 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.299 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.391 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.749 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.750 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4857MB free_disk=72.32533264160156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.751 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:05:55 compute-0 nova_compute[189279]: 2025-12-10 20:05:55.751 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.040 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.042 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.043 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.043 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.105 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:05:56 compute-0 podman[245337]: 2025-12-10 20:05:56.122857268 +0000 UTC m=+0.097473954 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.124 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.147 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.147 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:05:56 compute-0 nova_compute[189279]: 2025-12-10 20:05:56.720 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:57 compute-0 nova_compute[189279]: 2025-12-10 20:05:57.149 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:57 compute-0 nova_compute[189279]: 2025-12-10 20:05:57.149 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:05:57 compute-0 nova_compute[189279]: 2025-12-10 20:05:57.149 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:05:57 compute-0 nova_compute[189279]: 2025-12-10 20:05:57.363 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:05:57 compute-0 nova_compute[189279]: 2025-12-10 20:05:57.364 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:05:57 compute-0 nova_compute[189279]: 2025-12-10 20:05:57.364 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:05:57 compute-0 nova_compute[189279]: 2025-12-10 20:05:57.365 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:05:58 compute-0 nova_compute[189279]: 2025-12-10 20:05:58.580 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:05:58 compute-0 nova_compute[189279]: 2025-12-10 20:05:58.593 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:05:58 compute-0 nova_compute[189279]: 2025-12-10 20:05:58.594 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:05:58 compute-0 nova_compute[189279]: 2025-12-10 20:05:58.594 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:58 compute-0 nova_compute[189279]: 2025-12-10 20:05:58.594 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:58 compute-0 nova_compute[189279]: 2025-12-10 20:05:58.594 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:05:59 compute-0 nova_compute[189279]: 2025-12-10 20:05:59.400 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:05:59 compute-0 podman[203484]: time="2025-12-10T20:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:05:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:05:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec 10 20:06:01 compute-0 openstack_network_exporter[205632]: ERROR   20:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:06:01 compute-0 openstack_network_exporter[205632]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:06:01 compute-0 openstack_network_exporter[205632]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:06:01 compute-0 openstack_network_exporter[205632]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:06:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:06:01 compute-0 openstack_network_exporter[205632]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:06:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:06:01 compute-0 nova_compute[189279]: 2025-12-10 20:06:01.722 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:02 compute-0 podman[245360]: 2025-12-10 20:06:02.105174103 +0000 UTC m=+0.075624825 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 10 20:06:02 compute-0 podman[245358]: 2025-12-10 20:06:02.109475339 +0000 UTC m=+0.087245839 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:06:02 compute-0 podman[245359]: 2025-12-10 20:06:02.115202453 +0000 UTC m=+0.088755050 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:06:04 compute-0 nova_compute[189279]: 2025-12-10 20:06:04.402 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:05 compute-0 nova_compute[189279]: 2025-12-10 20:06:05.823 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397150.821517, b46f5142-0287-480c-a9d8-fb9b8c0d3587 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:06:05 compute-0 nova_compute[189279]: 2025-12-10 20:06:05.824 189283 INFO nova.compute.manager [-] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] VM Stopped (Lifecycle Event)
Dec 10 20:06:05 compute-0 nova_compute[189279]: 2025-12-10 20:06:05.842 189283 DEBUG nova.compute.manager [None req-b9139e0f-9d35-44e9-83c9-375c653bcb03 - - - - - -] [instance: b46f5142-0287-480c-a9d8-fb9b8c0d3587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:06:06 compute-0 nova_compute[189279]: 2025-12-10 20:06:06.724 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:08 compute-0 podman[245412]: 2025-12-10 20:06:08.092213184 +0000 UTC m=+0.075278827 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec 10 20:06:08 compute-0 podman[245413]: 2025-12-10 20:06:08.092748948 +0000 UTC m=+0.069551463 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:06:09 compute-0 nova_compute[189279]: 2025-12-10 20:06:09.406 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:11 compute-0 nova_compute[189279]: 2025-12-10 20:06:11.726 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:12 compute-0 podman[245454]: 2025-12-10 20:06:12.179145433 +0000 UTC m=+0.157297034 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 20:06:14 compute-0 nova_compute[189279]: 2025-12-10 20:06:14.409 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:15 compute-0 sshd-session[244760]: Received disconnect from 38.102.83.132 port 41222:11: disconnected by user
Dec 10 20:06:15 compute-0 sshd-session[244760]: Disconnected from user zuul 38.102.83.132 port 41222
Dec 10 20:06:15 compute-0 sshd-session[244757]: pam_unix(sshd:session): session closed for user zuul
Dec 10 20:06:15 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 10 20:06:15 compute-0 systemd-logind[789]: Session 30 logged out. Waiting for processes to exit.
Dec 10 20:06:15 compute-0 systemd-logind[789]: Removed session 30.
Dec 10 20:06:16 compute-0 nova_compute[189279]: 2025-12-10 20:06:16.729 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:16 compute-0 podman[245478]: 2025-12-10 20:06:16.880306416 +0000 UTC m=+0.113521875 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:06:19 compute-0 nova_compute[189279]: 2025-12-10 20:06:19.412 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:21 compute-0 nova_compute[189279]: 2025-12-10 20:06:21.731 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:06:23.384 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:06:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:06:23.384 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:06:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:06:23.385 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:06:24 compute-0 nova_compute[189279]: 2025-12-10 20:06:24.415 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:25 compute-0 podman[245497]: 2025-12-10 20:06:25.104462187 +0000 UTC m=+0.081640987 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, version=9.6)
Dec 10 20:06:25 compute-0 sshd-session[245518]: Accepted publickey for zuul from 38.102.83.132 port 56008 ssh2: RSA SHA256:L/SCRhDD2hlgP35vi6MGkgCM80jHQm/zqk6LaU3Vz9U
Dec 10 20:06:25 compute-0 systemd-logind[789]: New session 31 of user zuul.
Dec 10 20:06:25 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec 10 20:06:25 compute-0 sshd-session[245518]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 20:06:26 compute-0 sudo[245705]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthjuworwcnqhlmfnbjupwkxysrbydce ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765397185.609993-59323-84845527382358/AnsiballZ_command.py'
Dec 10 20:06:26 compute-0 sudo[245705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 20:06:26 compute-0 podman[245670]: 2025-12-10 20:06:26.295246345 +0000 UTC m=+0.113241128 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:06:26 compute-0 python3[245718]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 20:06:26 compute-0 sudo[245705]: pam_unix(sudo:session): session closed for user root
Dec 10 20:06:26 compute-0 nova_compute[189279]: 2025-12-10 20:06:26.734 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:29 compute-0 nova_compute[189279]: 2025-12-10 20:06:29.418 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:29 compute-0 podman[203484]: time="2025-12-10T20:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:06:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:06:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec 10 20:06:31 compute-0 openstack_network_exporter[205632]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:06:31 compute-0 openstack_network_exporter[205632]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:06:31 compute-0 openstack_network_exporter[205632]: ERROR   20:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:06:31 compute-0 openstack_network_exporter[205632]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:06:31 compute-0 openstack_network_exporter[205632]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:06:31 compute-0 nova_compute[189279]: 2025-12-10 20:06:31.736 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:33 compute-0 podman[245762]: 2025-12-10 20:06:33.113237947 +0000 UTC m=+0.079082828 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., release-0.7.12=, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Dec 10 20:06:33 compute-0 podman[245760]: 2025-12-10 20:06:33.131757345 +0000 UTC m=+0.090061393 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:06:33 compute-0 podman[245761]: 2025-12-10 20:06:33.154003444 +0000 UTC m=+0.113245688 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm)
Dec 10 20:06:34 compute-0 sudo[245991]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmsjigltbmfuyglshomkcqeehgefewop ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765397193.6356592-59486-156622082160799/AnsiballZ_command.py'
Dec 10 20:06:34 compute-0 sudo[245991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 20:06:34 compute-0 python3[245993]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 20:06:34 compute-0 sudo[245991]: pam_unix(sudo:session): session closed for user root
Dec 10 20:06:34 compute-0 nova_compute[189279]: 2025-12-10 20:06:34.422 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:36 compute-0 nova_compute[189279]: 2025-12-10 20:06:36.740 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:39 compute-0 podman[246032]: 2025-12-10 20:06:39.109449414 +0000 UTC m=+0.079541991 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:06:39 compute-0 podman[246031]: 2025-12-10 20:06:39.139737409 +0000 UTC m=+0.115203771 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 20:06:39 compute-0 nova_compute[189279]: 2025-12-10 20:06:39.426 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:41 compute-0 nova_compute[189279]: 2025-12-10 20:06:41.741 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:43 compute-0 podman[246090]: 2025-12-10 20:06:43.139452581 +0000 UTC m=+0.115793987 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 10 20:06:43 compute-0 sudo[246272]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlgdzcmirdhicxojwkiegeqpqzuzxtwy ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765397203.0266814-59640-162854204155928/AnsiballZ_command.py'
Dec 10 20:06:43 compute-0 sudo[246272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 20:06:43 compute-0 python3[246274]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 20:06:43 compute-0 sudo[246272]: pam_unix(sudo:session): session closed for user root
Dec 10 20:06:44 compute-0 nova_compute[189279]: 2025-12-10 20:06:44.428 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:46 compute-0 nova_compute[189279]: 2025-12-10 20:06:46.744 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:47 compute-0 podman[246313]: 2025-12-10 20:06:47.095867549 +0000 UTC m=+0.066445330 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 20:06:49 compute-0 nova_compute[189279]: 2025-12-10 20:06:49.432 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:51 compute-0 nova_compute[189279]: 2025-12-10 20:06:51.747 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:53 compute-0 nova_compute[189279]: 2025-12-10 20:06:53.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:06:54 compute-0 nova_compute[189279]: 2025-12-10 20:06:54.436 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:54 compute-0 nova_compute[189279]: 2025-12-10 20:06:54.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.486 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.530 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.530 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.531 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.531 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.601 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.667 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.668 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.727 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.729 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.790 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.791 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.852 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.859 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.915 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.916 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.974 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:55 compute-0 nova_compute[189279]: 2025-12-10 20:06:55.975 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.040 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.041 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.108 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:06:56 compute-0 podman[246352]: 2025-12-10 20:06:56.119406104 +0000 UTC m=+0.098637314 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, release=1755695350, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.443 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.445 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4879MB free_disk=72.32538986206055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.445 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.446 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.528 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.529 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.529 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.530 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.595 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.615 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.617 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.617 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:06:56 compute-0 nova_compute[189279]: 2025-12-10 20:06:56.748 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:57 compute-0 podman[246378]: 2025-12-10 20:06:57.08188892 +0000 UTC m=+0.063206151 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:06:57 compute-0 nova_compute[189279]: 2025-12-10 20:06:57.618 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:06:57 compute-0 nova_compute[189279]: 2025-12-10 20:06:57.619 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:06:58 compute-0 nova_compute[189279]: 2025-12-10 20:06:58.459 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:06:58 compute-0 nova_compute[189279]: 2025-12-10 20:06:58.459 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:06:58 compute-0 nova_compute[189279]: 2025-12-10 20:06:58.459 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:06:58 compute-0 sudo[246574]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxkeftmlnvlacgtukwmauptehohehkjc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765397217.9730434-59860-214614334406891/AnsiballZ_command.py'
Dec 10 20:06:58 compute-0 sudo[246574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 20:06:58 compute-0 python3[246576]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 10 20:06:58 compute-0 sudo[246574]: pam_unix(sudo:session): session closed for user root
Dec 10 20:06:59 compute-0 nova_compute[189279]: 2025-12-10 20:06:59.440 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:06:59 compute-0 podman[203484]: time="2025-12-10T20:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:06:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:06:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec 10 20:07:00 compute-0 nova_compute[189279]: 2025-12-10 20:07:00.623 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updating instance_info_cache with network_info: [{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:07:00 compute-0 nova_compute[189279]: 2025-12-10 20:07:00.702 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:07:00 compute-0 nova_compute[189279]: 2025-12-10 20:07:00.702 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:07:00 compute-0 nova_compute[189279]: 2025-12-10 20:07:00.703 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:00 compute-0 nova_compute[189279]: 2025-12-10 20:07:00.703 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:00 compute-0 nova_compute[189279]: 2025-12-10 20:07:00.703 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:01 compute-0 openstack_network_exporter[205632]: ERROR   20:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:07:01 compute-0 openstack_network_exporter[205632]: ERROR   20:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:07:01 compute-0 openstack_network_exporter[205632]: ERROR   20:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:07:01 compute-0 openstack_network_exporter[205632]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:07:01 compute-0 openstack_network_exporter[205632]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:07:01 compute-0 nova_compute[189279]: 2025-12-10 20:07:01.752 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:04 compute-0 podman[246615]: 2025-12-10 20:07:04.106005312 +0000 UTC m=+0.074395513 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 10 20:07:04 compute-0 podman[246616]: 2025-12-10 20:07:04.115155298 +0000 UTC m=+0.081730509 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 20:07:04 compute-0 podman[246617]: 2025-12-10 20:07:04.145342151 +0000 UTC m=+0.107825143 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, vcs-type=git, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, name=ubi9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
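[editor's note] The three podman events above are periodic healthcheck results: each container (ovn_metadata_agent, ceilometer_agent_ipmi, kepler) reports health_status=healthy with health_failing_streak=0, and the config_data shows the healthcheck test and mount each container was started with. As a minimal sketch only (it assumes the podman CLI is available on the host and uses the container names from the log; the exact JSON field name varies between podman versions), the same status can be read back with podman inspect:

    import json
    import subprocess

    def health_status(container: str) -> str:
        """Return the healthcheck status podman records for a container."""
        out = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # Newer podman exposes State.Health, older releases State.Healthcheck.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("ovn_metadata_agent", "ceilometer_agent_ipmi", "kepler"):
        print(name, health_status(name))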
Dec 10 20:07:04 compute-0 nova_compute[189279]: 2025-12-10 20:07:04.442 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:04 compute-0 nova_compute[189279]: 2025-12-10 20:07:04.567 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:06 compute-0 nova_compute[189279]: 2025-12-10 20:07:06.754 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:09 compute-0 nova_compute[189279]: 2025-12-10 20:07:09.447 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:10 compute-0 podman[246667]: 2025-12-10 20:07:10.103747449 +0000 UTC m=+0.076819178 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec 10 20:07:10 compute-0 podman[246668]: 2025-12-10 20:07:10.119429291 +0000 UTC m=+0.090129696 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:07:11 compute-0 nova_compute[189279]: 2025-12-10 20:07:11.756 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:14 compute-0 podman[246711]: 2025-12-10 20:07:14.155092841 +0000 UTC m=+0.130309577 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:07:14 compute-0 nova_compute[189279]: 2025-12-10 20:07:14.450 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:16 compute-0 nova_compute[189279]: 2025-12-10 20:07:16.758 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:18 compute-0 podman[246737]: 2025-12-10 20:07:18.113298417 +0000 UTC m=+0.076893140 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 20:07:19 compute-0 nova_compute[189279]: 2025-12-10 20:07:19.454 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:21 compute-0 nova_compute[189279]: 2025-12-10 20:07:21.761 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:07:23.386 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:07:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:07:23.387 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:07:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:07:23.388 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
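[editor's note] The three ovn_metadata_agent lines above show an oslo.concurrency named lock around ProcessMonitor._check_child_processes being acquired after a 0.001s wait, then released after effectively zero hold time. A minimal sketch of the same pattern (assuming only that oslo.concurrency is installed; the lock name is copied from the log and the guarded function is a hypothetical stand-in):

    from oslo_concurrency import lockutils

    def check_child_processes():
        # hypothetical stand-in for ProcessMonitor._check_child_processes
        pass

    # Same acquire/release pattern the agent logs: a named in-process lock
    # serializing the periodic child-process check.
    with lockutils.lock("_check_child_processes"):
        check_child_processes()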
Dec 10 20:07:24 compute-0 nova_compute[189279]: 2025-12-10 20:07:24.457 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:26 compute-0 nova_compute[189279]: 2025-12-10 20:07:26.765 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:27 compute-0 podman[246759]: 2025-12-10 20:07:27.093204169 +0000 UTC m=+0.066507010 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:07:27 compute-0 podman[246779]: 2025-12-10 20:07:27.207393262 +0000 UTC m=+0.087066563 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
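[editor's note] The node_exporter config above restricts the systemd collector with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A quick plain-Python check of which unit names that pattern admits (the sample unit names are illustrative; fullmatch approximates node_exporter's anchored matching):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    samples = [
        "edpm_nova_compute.service",   # matches: edpm_.*
        "ovs-vswitchd.service",        # matches: ovs.*
        "openvswitch.service",         # matches: openvswitch
        "virtqemud.service",           # matches: virt.*
        "rsyslog.service",             # matches: rsyslog
        "sshd.service",                # filtered out
    ]
    for unit in samples:
        print(unit, bool(unit_include.fullmatch(unit)))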
Dec 10 20:07:29 compute-0 nova_compute[189279]: 2025-12-10 20:07:29.461 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:29 compute-0 podman[203484]: time="2025-12-10T20:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:07:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:07:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
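[editor's note] The two podman[203484] lines above are the libpod REST API answering the exporter's GET /v4.9.3/libpod/containers/json and .../containers/stats requests; per the podman_exporter config it is served on the unix socket /run/podman/podman.sock. A stdlib-only sketch of the same kind of request (socket path and API version taken from the log; running it requires permission to read the socket):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")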
Dec 10 20:07:31 compute-0 openstack_network_exporter[205632]: ERROR   20:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:07:31 compute-0 openstack_network_exporter[205632]: ERROR   20:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:07:31 compute-0 openstack_network_exporter[205632]: ERROR   20:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:07:31 compute-0 openstack_network_exporter[205632]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:07:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:07:31 compute-0 openstack_network_exporter[205632]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:07:31 compute-0 openstack_network_exporter[205632]: 
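[editor's note] The openstack_network_exporter errors above mean the exporter found no appctl control sockets for ovn-northd or the OVS DB server, and its dpif-netdev calls hit a datapath that does not exist on this node; ovn-northd does not normally run on a compute node, so at least that part is expected noise. As a rough illustration only (the run directories and the <daemon>.<pid>.ctl naming are assumptions based on common OVS/OVN defaults), this is the kind of lookup that is failing:

    import glob
    import os

    def find_ctl_sockets(daemon: str, rundirs=("/run/openvswitch", "/run/ovn")):
        """Return appctl-style control sockets (<daemon>.<pid>.ctl), if any."""
        hits = []
        for rundir in rundirs:
            hits.extend(glob.glob(os.path.join(rundir, f"{daemon}.*.ctl")))
        return hits

    for daemon in ("ovn-northd", "ovsdb-server", "ovs-vswitchd"):
        socks = find_ctl_sockets(daemon)
        print(daemon, socks or "no control socket files found")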
Dec 10 20:07:31 compute-0 nova_compute[189279]: 2025-12-10 20:07:31.767 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:34 compute-0 nova_compute[189279]: 2025-12-10 20:07:34.465 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:35 compute-0 podman[246804]: 2025-12-10 20:07:35.096261767 +0000 UTC m=+0.075444792 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:07:35 compute-0 podman[246805]: 2025-12-10 20:07:35.112505195 +0000 UTC m=+0.079335647 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 20:07:35 compute-0 podman[246806]: 2025-12-10 20:07:35.125279539 +0000 UTC m=+0.094443535 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler)
Dec 10 20:07:36 compute-0 nova_compute[189279]: 2025-12-10 20:07:36.770 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:37 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 10 20:07:39 compute-0 nova_compute[189279]: 2025-12-10 20:07:39.470 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:41 compute-0 podman[246861]: 2025-12-10 20:07:41.118097675 +0000 UTC m=+0.087986080 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:07:41 compute-0 podman[246860]: 2025-12-10 20:07:41.180384442 +0000 UTC m=+0.141592844 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 20:07:41 compute-0 nova_compute[189279]: 2025-12-10 20:07:41.772 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.177 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.177 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
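[editor's note] The two ceilometer lines above say the [pollsters] source has more pollsters than worker threads and will be processed by a single-thread executor, so pollster runs queue behind one another. A toy sketch of that behaviour with the stdlib executor (pollster names are taken from the samples below; the sleep is an illustrative stand-in for one polling run):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        time.sleep(0.1)          # stand-in for one pollster execution
        return f"{name} done"

    pollsters = ["disk.ephemeral.size", "disk.device.capacity", "disk.root.size"]

    # One worker, several tasks: they run strictly one after another,
    # which is why the overall polling cycle takes longer than expected.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)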
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.177 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.188 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '12986b74-7b15-4ff4-9019-081950660d4b', 'name': 'test_0', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.194 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '26729739-a300-43fe-8678-5294ed41f6ed', 'name': 'vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7', 'flavor': {'id': '0fc2e5b1-b522-4c52-bdef-97db09e458e4', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '06e6231d-0a77-4b09-acb3-e7faf5a777be'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fe518ea62a94467e823b2b1046c57a2e', 'user_id': '2143e69e49fd49db99c8737c973c1ea5', 'hostId': 'dca9a431889d973c08c01d570b945eeb86205897a644dc696ca35852', 'status': 'active', 'metadata': {'metering.server_group': '9d7a68be-d216-4b06-b611-878d356c6d68'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.195 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.196 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.198 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:07:42.195961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.199 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:07:42.199033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.239 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.240 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.240 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.271 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.272 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.272 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
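[editor's note] The disk.device.capacity samples above report 1073741824 for two devices on each instance, which is exactly the 1 GB root disk and 1 GB ephemeral disk declared in the m1.small flavor in the discovery output (the smaller third value presumably belongs to an additional small device such as a config drive; that attribution is an assumption). A one-line check of the arithmetic:

    GiB = 1024 ** 3

    flavor = {"disk": 1, "ephemeral": 1}   # GB values from the discovery output above
    reported = 1073741824                  # bytes, from disk.device.capacity

    assert reported == flavor["disk"] * GiB
    print(reported / GiB, "GiB")           # -> 1.0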
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.273 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.275 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:07:42.274263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:07:42.276060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.282 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.289 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.291 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.293 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:07:42.292548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.294 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.295 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.296 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.297 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:07:42.296312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.299 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:07:42.300133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.301 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.301 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.304 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:07:42.303801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.305 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.306 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:07:42.307218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.336 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/memory.usage volume: 48.76953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.366 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.367 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.367 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.368 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.368 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes volume: 2640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.368 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes volume: 1822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:07:42.367974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.369 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.370 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.370 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.370 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:07:42.369938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.371 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.371 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.371 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.372 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.373 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.373 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.373 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:07:42.373094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.374 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.375 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.375 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:07:42.374863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.376 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.376 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.377 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:07:42.376524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.378 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.378 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:07:42.378662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.472 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.472 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.473 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.564 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.565 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.565 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.566 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.566 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/cpu volume: 43730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.567 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/cpu volume: 36470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:07:42.566665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.568 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 425951231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.568 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 63853652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.569 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.latency volume: 49706577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.569 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 395037622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.569 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 62323348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.570 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.latency volume: 49949275 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:07:42.568402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.574 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.577 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.577 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.577 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.578 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.578 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:07:42.571398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.581 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:07:42.580999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.581 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.582 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.582 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.582 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.583 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.584 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.584 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.585 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.585 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.586 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:07:42.584296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.586 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.587 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.588 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.588 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:07:42.587889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:07:42.590006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.591 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 816753194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.591 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 10242364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.592 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.592 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 1570902949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.592 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 11471208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.593 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:07:42.594239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.594 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.594 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.595 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.595 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.596 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.596 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.597 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:07:42.597524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.597 14 DEBUG ceilometer.compute.pollsters [-] 12986b74-7b15-4ff4-9019-081950660d4b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.598 14 DEBUG ceilometer.compute.pollsters [-] 26729739-a300-43fe-8678-5294ed41f6ed/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.599 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:07:42.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:07:44 compute-0 nova_compute[189279]: 2025-12-10 20:07:44.475 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:44 compute-0 podman[246905]: 2025-12-10 20:07:44.803473724 +0000 UTC m=+0.122144219 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:07:46 compute-0 nova_compute[189279]: 2025-12-10 20:07:46.776 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:49 compute-0 podman[246930]: 2025-12-10 20:07:49.124354786 +0000 UTC m=+0.108130002 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:07:49 compute-0 nova_compute[189279]: 2025-12-10 20:07:49.484 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:51 compute-0 nova_compute[189279]: 2025-12-10 20:07:51.778 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:54 compute-0 nova_compute[189279]: 2025-12-10 20:07:54.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:54 compute-0 nova_compute[189279]: 2025-12-10 20:07:54.490 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:55 compute-0 nova_compute[189279]: 2025-12-10 20:07:55.485 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:55 compute-0 nova_compute[189279]: 2025-12-10 20:07:55.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.518 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.519 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.519 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.519 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.601 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.659 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.661 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.717 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.718 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.776 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.778 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.799 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.872 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.881 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.963 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:56 compute-0 nova_compute[189279]: 2025-12-10 20:07:56.964 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.035 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.036 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.112 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.114 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.171 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.524 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.526 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4875MB free_disk=72.32538986206055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.526 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.527 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.615 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.615 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.616 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.616 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.718 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.731 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.733 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:07:57 compute-0 nova_compute[189279]: 2025-12-10 20:07:57.733 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:07:58 compute-0 podman[246975]: 2025-12-10 20:07:58.102949538 +0000 UTC m=+0.068029473 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 20:07:58 compute-0 podman[246974]: 2025-12-10 20:07:58.123246074 +0000 UTC m=+0.087233869 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:07:58 compute-0 sshd-session[245521]: Received disconnect from 38.102.83.132 port 56008:11: disconnected by user
Dec 10 20:07:58 compute-0 sshd-session[245521]: Disconnected from user zuul 38.102.83.132 port 56008
Dec 10 20:07:58 compute-0 sshd-session[245518]: pam_unix(sshd:session): session closed for user zuul
Dec 10 20:07:58 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec 10 20:07:58 compute-0 systemd[1]: session-31.scope: Consumed 3.780s CPU time.
Dec 10 20:07:58 compute-0 systemd-logind[789]: Session 31 logged out. Waiting for processes to exit.
Dec 10 20:07:58 compute-0 systemd-logind[789]: Removed session 31.
Dec 10 20:07:58 compute-0 nova_compute[189279]: 2025-12-10 20:07:58.733 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:07:58 compute-0 nova_compute[189279]: 2025-12-10 20:07:58.734 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:07:58 compute-0 nova_compute[189279]: 2025-12-10 20:07:58.734 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:07:59 compute-0 nova_compute[189279]: 2025-12-10 20:07:59.475 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:07:59 compute-0 nova_compute[189279]: 2025-12-10 20:07:59.476 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:07:59 compute-0 nova_compute[189279]: 2025-12-10 20:07:59.476 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:07:59 compute-0 nova_compute[189279]: 2025-12-10 20:07:59.477 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:07:59 compute-0 nova_compute[189279]: 2025-12-10 20:07:59.494 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:07:59 compute-0 podman[203484]: time="2025-12-10T20:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:07:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:07:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec 10 20:08:01 compute-0 openstack_network_exporter[205632]: ERROR   20:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:08:01 compute-0 openstack_network_exporter[205632]: ERROR   20:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:08:01 compute-0 openstack_network_exporter[205632]: ERROR   20:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:08:01 compute-0 openstack_network_exporter[205632]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:08:01 compute-0 openstack_network_exporter[205632]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.508 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [{"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.523 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-12986b74-7b15-4ff4-9019-081950660d4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.523 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.523 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.523 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.524 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.524 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.524 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:08:01 compute-0 nova_compute[189279]: 2025-12-10 20:08:01.783 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:04 compute-0 nova_compute[189279]: 2025-12-10 20:08:04.497 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:06 compute-0 podman[247018]: 2025-12-10 20:08:06.127323638 +0000 UTC m=+0.093018936 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 20:08:06 compute-0 podman[247017]: 2025-12-10 20:08:06.139160636 +0000 UTC m=+0.095002049 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 10 20:08:06 compute-0 podman[247019]: 2025-12-10 20:08:06.150381038 +0000 UTC m=+0.100303641 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vcs-type=git)
Dec 10 20:08:06 compute-0 nova_compute[189279]: 2025-12-10 20:08:06.786 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:09 compute-0 nova_compute[189279]: 2025-12-10 20:08:09.503 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:11 compute-0 nova_compute[189279]: 2025-12-10 20:08:11.788 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:12 compute-0 podman[247073]: 2025-12-10 20:08:12.150176664 +0000 UTC m=+0.104390452 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:08:12 compute-0 podman[247072]: 2025-12-10 20:08:12.173057 +0000 UTC m=+0.133640740 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:08:14 compute-0 nova_compute[189279]: 2025-12-10 20:08:14.508 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:15 compute-0 podman[247114]: 2025-12-10 20:08:15.163741375 +0000 UTC m=+0.133317742 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:08:16 compute-0 nova_compute[189279]: 2025-12-10 20:08:16.794 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:19 compute-0 nova_compute[189279]: 2025-12-10 20:08:19.514 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:20 compute-0 podman[247141]: 2025-12-10 20:08:20.16343166 +0000 UTC m=+0.125753527 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 10 20:08:21 compute-0 nova_compute[189279]: 2025-12-10 20:08:21.799 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:23.388 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:08:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:23.389 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:08:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:23.391 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:08:24 compute-0 nova_compute[189279]: 2025-12-10 20:08:24.520 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:26 compute-0 nova_compute[189279]: 2025-12-10 20:08:26.800 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:29 compute-0 podman[247161]: 2025-12-10 20:08:29.130053275 +0000 UTC m=+0.110903737 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:08:29 compute-0 podman[247162]: 2025-12-10 20:08:29.160367631 +0000 UTC m=+0.134461341 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Dec 10 20:08:29 compute-0 nova_compute[189279]: 2025-12-10 20:08:29.523 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:29 compute-0 podman[203484]: time="2025-12-10T20:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:08:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:08:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec 10 20:08:31 compute-0 openstack_network_exporter[205632]: ERROR   20:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:08:31 compute-0 openstack_network_exporter[205632]: ERROR   20:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:08:31 compute-0 openstack_network_exporter[205632]: ERROR   20:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:08:31 compute-0 openstack_network_exporter[205632]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:08:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:08:31 compute-0 openstack_network_exporter[205632]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:08:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:08:31 compute-0 nova_compute[189279]: 2025-12-10 20:08:31.802 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:34 compute-0 nova_compute[189279]: 2025-12-10 20:08:34.526 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:36 compute-0 nova_compute[189279]: 2025-12-10 20:08:36.804 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:37 compute-0 podman[247205]: 2025-12-10 20:08:37.11409053 +0000 UTC m=+0.092327197 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 10 20:08:37 compute-0 podman[247206]: 2025-12-10 20:08:37.138356543 +0000 UTC m=+0.113373793 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 10 20:08:37 compute-0 podman[247207]: 2025-12-10 20:08:37.138367044 +0000 UTC m=+0.091679849 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec 10 20:08:39 compute-0 nova_compute[189279]: 2025-12-10 20:08:39.530 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:41 compute-0 nova_compute[189279]: 2025-12-10 20:08:41.806 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:43 compute-0 podman[247259]: 2025-12-10 20:08:43.117982977 +0000 UTC m=+0.097609209 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:08:43 compute-0 podman[247260]: 2025-12-10 20:08:43.138689145 +0000 UTC m=+0.109803718 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:08:44 compute-0 nova_compute[189279]: 2025-12-10 20:08:44.534 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:46 compute-0 podman[247301]: 2025-12-10 20:08:46.155463632 +0000 UTC m=+0.133578217 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 10 20:08:46 compute-0 nova_compute[189279]: 2025-12-10 20:08:46.810 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:49 compute-0 nova_compute[189279]: 2025-12-10 20:08:49.539 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:51 compute-0 podman[247327]: 2025-12-10 20:08:51.141949393 +0000 UTC m=+0.109533360 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 10 20:08:51 compute-0 nova_compute[189279]: 2025-12-10 20:08:51.814 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:54 compute-0 nova_compute[189279]: 2025-12-10 20:08:54.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:54 compute-0 nova_compute[189279]: 2025-12-10 20:08:54.545 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:55 compute-0 nova_compute[189279]: 2025-12-10 20:08:55.484 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:56 compute-0 nova_compute[189279]: 2025-12-10 20:08:56.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:56 compute-0 nova_compute[189279]: 2025-12-10 20:08:56.817 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:58 compute-0 nova_compute[189279]: 2025-12-10 20:08:58.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:58 compute-0 nova_compute[189279]: 2025-12-10 20:08:58.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:08:58 compute-0 nova_compute[189279]: 2025-12-10 20:08:58.796 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:08:58 compute-0 nova_compute[189279]: 2025-12-10 20:08:58.796 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:08:58 compute-0 nova_compute[189279]: 2025-12-10 20:08:58.797 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.215 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.216 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.216 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.216 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.217 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.218 189283 INFO nova.compute.manager [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Terminating instance
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.219 189283 DEBUG nova.compute.manager [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:08:59 compute-0 kernel: tap0785494f-98 (unregistering): left promiscuous mode
Dec 10 20:08:59 compute-0 NetworkManager[56238]: <info>  [1765397339.2777] device (tap0785494f-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:08:59 compute-0 ovn_controller[97701]: 2025-12-10T20:08:59Z|00074|binding|INFO|Releasing lport 0785494f-981a-4c23-8e42-a15d0c582bfb from this chassis (sb_readonly=0)
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.295 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 ovn_controller[97701]: 2025-12-10T20:08:59Z|00075|binding|INFO|Setting lport 0785494f-981a-4c23-8e42-a15d0c582bfb down in Southbound
Dec 10 20:08:59 compute-0 ovn_controller[97701]: 2025-12-10T20:08:59Z|00076|binding|INFO|Removing iface tap0785494f-98 ovn-installed in OVS
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.301 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.305 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:ad:37 192.168.0.54'], port_security=['fa:16:3e:0b:ad:37 192.168.0.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-pjemjxzxegr5-tu426txpq63m-ebreuwdsmaq4-port-lzq7unw5gr5p', 'neutron:cidrs': '192.168.0.54/24', 'neutron:device_id': '26729739-a300-43fe-8678-5294ed41f6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-pjemjxzxegr5-tu426txpq63m-ebreuwdsmaq4-port-lzq7unw5gr5p', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=0785494f-981a-4c23-8e42-a15d0c582bfb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.308 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 0785494f-981a-4c23-8e42-a15d0c582bfb in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 unbound from our chassis
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.312 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.324 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 10 20:08:59 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 1min 28.898s CPU time.
Dec 10 20:08:59 compute-0 systemd-machined[155642]: Machine qemu-5-instance-00000005 terminated.
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.344 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[88c34e10-7bbf-46e4-b40f-656795da77ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.380 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7b6fec-5983-4f29-b4be-29caade2c155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.385 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[c298ed9a-d424-4d6d-b335-bbac22b874c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:08:59 compute-0 podman[247348]: 2025-12-10 20:08:59.398061366 +0000 UTC m=+0.090626491 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.413 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5bc39c-3e13-4755-aa1a-9e2281f3848d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:08:59 compute-0 podman[247351]: 2025-12-10 20:08:59.421309232 +0000 UTC m=+0.098426151 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9-minimal, release=1755695350, vcs-type=git, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.430 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fcb876-36ae-49b5-9abc-910267096084]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape55a1ff5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:f6:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 23, 'rx_bytes': 658, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 23, 'rx_bytes': 658, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372629, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247401, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.447 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[283f4c93-fbe3-41bc-bb17-b2e643ac13bc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372645, 'tstamp': 372645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247404, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tape55a1ff5-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 372649, 'tstamp': 372649}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247404, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.449 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.451 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 rsyslogd[236537]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.458 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 rsyslogd[236537]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.459 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape55a1ff5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.460 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.460 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape55a1ff5-f0, col_values=(('external_ids', {'iface-id': 'f70c9140-d0bb-473b-94ef-0336fe52cbb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.461 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.506 189283 INFO nova.virt.libvirt.driver [-] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Instance destroyed successfully.
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.507 189283 DEBUG nova.objects.instance [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'resources' on Instance uuid 26729739-a300-43fe-8678-5294ed41f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.512 189283 DEBUG nova.compute.manager [req-fa0122d9-ac6b-4243-8bab-d7a6c62be0de req-be2b964a-ba2d-48f5-ac93-f51865981be5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received event network-vif-unplugged-0785494f-981a-4c23-8e42-a15d0c582bfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.512 189283 DEBUG oslo_concurrency.lockutils [req-fa0122d9-ac6b-4243-8bab-d7a6c62be0de req-be2b964a-ba2d-48f5-ac93-f51865981be5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.512 189283 DEBUG oslo_concurrency.lockutils [req-fa0122d9-ac6b-4243-8bab-d7a6c62be0de req-be2b964a-ba2d-48f5-ac93-f51865981be5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.513 189283 DEBUG oslo_concurrency.lockutils [req-fa0122d9-ac6b-4243-8bab-d7a6c62be0de req-be2b964a-ba2d-48f5-ac93-f51865981be5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.513 189283 DEBUG nova.compute.manager [req-fa0122d9-ac6b-4243-8bab-d7a6c62be0de req-be2b964a-ba2d-48f5-ac93-f51865981be5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] No waiting events found dispatching network-vif-unplugged-0785494f-981a-4c23-8e42-a15d0c582bfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.513 189283 DEBUG nova.compute.manager [req-fa0122d9-ac6b-4243-8bab-d7a6c62be0de req-be2b964a-ba2d-48f5-ac93-f51865981be5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received event network-vif-unplugged-0785494f-981a-4c23-8e42-a15d0c582bfb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.519 189283 DEBUG nova.virt.libvirt.vif [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:01:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xzxegr5-tu426txpq63m-ebreuwdsmaq4-vnf-julcvdsawbw7',id=5,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:01:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='9d7a68be-d216-4b06-b611-878d356c6d68'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-8y4xv3jg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:01:09Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDU3Nzg1NTg2NDQwMzg4MzE2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 10 20:08:59 compute-0 nova_compute[189279]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDU3Nzg1NTg2NDQwMzg4MzE2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA1Nzc4NTU4NjQ0MDM4ODMxNjc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNTc3ODU1ODY0NDAzODgzMTY3PT0tLQo=',user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=26729739-a300-43fe-8678-5294ed41f6ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.520 189283 DEBUG nova.network.os_vif_util [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.520 189283 DEBUG nova.network.os_vif_util [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:ad:37,bridge_name='br-int',has_traffic_filtering=True,id=0785494f-981a-4c23-8e42-a15d0c582bfb,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0785494f-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.521 189283 DEBUG os_vif [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:ad:37,bridge_name='br-int',has_traffic_filtering=True,id=0785494f-981a-4c23-8e42-a15d0c582bfb,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0785494f-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.523 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.523 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0785494f-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.528 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.534 189283 INFO os_vif [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:ad:37,bridge_name='br-int',has_traffic_filtering=True,id=0785494f-981a-4c23-8e42-a15d0c582bfb,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0785494f-98')
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.534 189283 INFO nova.virt.libvirt.driver [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Deleting instance files /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed_del
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.535 189283 INFO nova.virt.libvirt.driver [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Deletion of /var/lib/nova/instances/26729739-a300-43fe-8678-5294ed41f6ed_del complete
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.601 189283 INFO nova.compute.manager [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Took 0.38 seconds to destroy the instance on the hypervisor.
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.602 189283 DEBUG oslo.service.loopingcall [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.603 189283 DEBUG nova.compute.manager [-] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.603 189283 DEBUG nova.network.neutron [-] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.685 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:08:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:08:59.685 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.686 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:08:59 compute-0 rsyslogd[236537]: message too long (8192) with configured size 8096, begin of message is: 2025-12-10 20:08:59.519 189283 DEBUG nova.virt.libvirt.vif [None req-32e1ce7f-66 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 10 20:08:59 compute-0 podman[203484]: time="2025-12-10T20:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:08:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:08:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.779 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updating instance_info_cache with network_info: [{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.794 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.794 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.794 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.795 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.795 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.795 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.823 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.823 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.823 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.823 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.908 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:08:59 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.997 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:08:59.999 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.097 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.098 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.162 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.163 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.259 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.593 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.594 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5100MB free_disk=72.34709930419922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.594 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.595 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.682 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 12986b74-7b15-4ff4-9019-081950660d4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.683 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 26729739-a300-43fe-8678-5294ed41f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.683 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.683 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.738 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.752 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.775 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.776 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.823 189283 DEBUG nova.compute.manager [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received event network-changed-0785494f-981a-4c23-8e42-a15d0c582bfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.824 189283 DEBUG nova.compute.manager [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Refreshing instance network info cache due to event network-changed-0785494f-981a-4c23-8e42-a15d0c582bfb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.824 189283 DEBUG oslo_concurrency.lockutils [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.824 189283 DEBUG oslo_concurrency.lockutils [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:09:00 compute-0 nova_compute[189279]: 2025-12-10 20:09:00.825 189283 DEBUG nova.network.neutron [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Refreshing network info cache for port 0785494f-981a-4c23-8e42-a15d0c582bfb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:09:01 compute-0 openstack_network_exporter[205632]: ERROR   20:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:09:01 compute-0 openstack_network_exporter[205632]: ERROR   20:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:09:01 compute-0 openstack_network_exporter[205632]: ERROR   20:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:09:01 compute-0 openstack_network_exporter[205632]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:09:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:09:01 compute-0 openstack_network_exporter[205632]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:09:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.468 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.545 189283 DEBUG nova.network.neutron [-] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.562 189283 INFO nova.compute.manager [-] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Took 1.96 seconds to deallocate network for instance.
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.601 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.602 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.604 189283 DEBUG nova.compute.manager [req-e9d634cd-0587-4dea-8877-9da58c510ec8 req-5f6cfb5e-eb26-4a0a-b860-6391dd84887e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received event network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.605 189283 DEBUG oslo_concurrency.lockutils [req-e9d634cd-0587-4dea-8877-9da58c510ec8 req-5f6cfb5e-eb26-4a0a-b860-6391dd84887e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "26729739-a300-43fe-8678-5294ed41f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.605 189283 DEBUG oslo_concurrency.lockutils [req-e9d634cd-0587-4dea-8877-9da58c510ec8 req-5f6cfb5e-eb26-4a0a-b860-6391dd84887e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.606 189283 DEBUG oslo_concurrency.lockutils [req-e9d634cd-0587-4dea-8877-9da58c510ec8 req-5f6cfb5e-eb26-4a0a-b860-6391dd84887e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.606 189283 DEBUG nova.compute.manager [req-e9d634cd-0587-4dea-8877-9da58c510ec8 req-5f6cfb5e-eb26-4a0a-b860-6391dd84887e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] No waiting events found dispatching network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.606 189283 WARNING nova.compute.manager [req-e9d634cd-0587-4dea-8877-9da58c510ec8 req-5f6cfb5e-eb26-4a0a-b860-6391dd84887e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Received unexpected event network-vif-plugged-0785494f-981a-4c23-8e42-a15d0c582bfb for instance with vm_state active and task_state deleting.
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.683 189283 DEBUG nova.compute.provider_tree [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.696 189283 DEBUG nova.scheduler.client.report [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.714 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.741 189283 INFO nova.scheduler.client.report [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Deleted allocations for instance 26729739-a300-43fe-8678-5294ed41f6ed
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.812 189283 DEBUG oslo_concurrency.lockutils [None req-32e1ce7f-6629-4f6b-bae3-f6e18ac970da 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "26729739-a300-43fe-8678-5294ed41f6ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:01 compute-0 nova_compute[189279]: 2025-12-10 20:09:01.819 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:02 compute-0 nova_compute[189279]: 2025-12-10 20:09:02.188 189283 DEBUG nova.network.neutron [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updated VIF entry in instance network info cache for port 0785494f-981a-4c23-8e42-a15d0c582bfb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:09:02 compute-0 nova_compute[189279]: 2025-12-10 20:09:02.189 189283 DEBUG nova.network.neutron [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Updating instance_info_cache with network_info: [{"id": "0785494f-981a-4c23-8e42-a15d0c582bfb", "address": "fa:16:3e:0b:ad:37", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0785494f-98", "ovs_interfaceid": "0785494f-981a-4c23-8e42-a15d0c582bfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:09:02 compute-0 nova_compute[189279]: 2025-12-10 20:09:02.206 189283 DEBUG oslo_concurrency.lockutils [req-bf84efe3-221a-44d9-9212-9c587859dc9a req-336d0b95-89a4-4675-bb29-89b9e55df727 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-26729739-a300-43fe-8678-5294ed41f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:09:03 compute-0 nova_compute[189279]: 2025-12-10 20:09:03.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:04 compute-0 nova_compute[189279]: 2025-12-10 20:09:04.525 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:05 compute-0 nova_compute[189279]: 2025-12-10 20:09:05.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:06.689 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:09:06 compute-0 nova_compute[189279]: 2025-12-10 20:09:06.821 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:08 compute-0 podman[247442]: 2025-12-10 20:09:08.107134711 +0000 UTC m=+0.072784921 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0)
Dec 10 20:09:08 compute-0 podman[247440]: 2025-12-10 20:09:08.116064121 +0000 UTC m=+0.093613842 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 20:09:08 compute-0 podman[247441]: 2025-12-10 20:09:08.140000726 +0000 UTC m=+0.113769325 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 20:09:09 compute-0 nova_compute[189279]: 2025-12-10 20:09:09.527 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:11 compute-0 nova_compute[189279]: 2025-12-10 20:09:11.824 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:14 compute-0 podman[247500]: 2025-12-10 20:09:14.107121842 +0000 UTC m=+0.075253787 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:09:14 compute-0 podman[247499]: 2025-12-10 20:09:14.125510507 +0000 UTC m=+0.105381428 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:09:14 compute-0 nova_compute[189279]: 2025-12-10 20:09:14.504 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397339.5030632, 26729739-a300-43fe-8678-5294ed41f6ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:09:14 compute-0 nova_compute[189279]: 2025-12-10 20:09:14.505 189283 INFO nova.compute.manager [-] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] VM Stopped (Lifecycle Event)
Dec 10 20:09:14 compute-0 nova_compute[189279]: 2025-12-10 20:09:14.521 189283 DEBUG nova.compute.manager [None req-70bb1ab2-724a-4f37-a842-357920cd04cd - - - - - -] [instance: 26729739-a300-43fe-8678-5294ed41f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:09:14 compute-0 nova_compute[189279]: 2025-12-10 20:09:14.529 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:16 compute-0 nova_compute[189279]: 2025-12-10 20:09:16.826 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:16 compute-0 podman[247539]: 2025-12-10 20:09:16.972924865 +0000 UTC m=+0.111579606 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.610 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.611 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.612 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.612 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.612 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.613 189283 INFO nova.compute.manager [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Terminating instance
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.614 189283 DEBUG nova.compute.manager [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:09:17 compute-0 kernel: tap20b76af1-42 (unregistering): left promiscuous mode
Dec 10 20:09:17 compute-0 NetworkManager[56238]: <info>  [1765397357.6494] device (tap20b76af1-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:09:17 compute-0 ovn_controller[97701]: 2025-12-10T20:09:17Z|00077|binding|INFO|Releasing lport 20b76af1-42c6-4b7d-a834-c20e017b3e8d from this chassis (sb_readonly=0)
Dec 10 20:09:17 compute-0 ovn_controller[97701]: 2025-12-10T20:09:17Z|00078|binding|INFO|Setting lport 20b76af1-42c6-4b7d-a834-c20e017b3e8d down in Southbound
Dec 10 20:09:17 compute-0 ovn_controller[97701]: 2025-12-10T20:09:17Z|00079|binding|INFO|Removing iface tap20b76af1-42 ovn-installed in OVS
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.663 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:17.668 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:2e:35 192.168.0.139'], port_security=['fa:16:3e:96:2e:35 192.168.0.139'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.139/24', 'neutron:device_id': '12986b74-7b15-4ff4-9019-081950660d4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe518ea62a94467e823b2b1046c57a2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbfca0ae-785e-46d9-973d-467188fc7f6f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a76816-4d6e-46b1-bfa0-919a54b6e056, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=20b76af1-42c6-4b7d-a834-c20e017b3e8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:09:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:17.671 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 20b76af1-42c6-4b7d-a834-c20e017b3e8d in datapath e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 unbound from our chassis
Dec 10 20:09:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:17.673 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e55a1ff5-f742-4bad-ae9c-2f6d4795fa29, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:09:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:17.675 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[f1900943-93af-4b36-a36f-917aa56639a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:17.676 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 namespace which is not needed anymore
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.683 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:17 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 10 20:09:17 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 28.716s CPU time.
Dec 10 20:09:17 compute-0 systemd-machined[155642]: Machine qemu-1-instance-00000001 terminated.
Dec 10 20:09:17 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [NOTICE]   (239560) : haproxy version is 2.8.14-c23fe91
Dec 10 20:09:17 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [NOTICE]   (239560) : path to executable is /usr/sbin/haproxy
Dec 10 20:09:17 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [WARNING]  (239560) : Exiting Master process...
Dec 10 20:09:17 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [WARNING]  (239560) : Exiting Master process...
Dec 10 20:09:17 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [ALERT]    (239560) : Current worker (239562) exited with code 143 (Terminated)
Dec 10 20:09:17 compute-0 neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29[239556]: [WARNING]  (239560) : All workers exited. Exiting... (0)
Dec 10 20:09:17 compute-0 systemd[1]: libpod-429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102.scope: Deactivated successfully.
Dec 10 20:09:17 compute-0 podman[247591]: 2025-12-10 20:09:17.862963659 +0000 UTC m=+0.067030426 container died 429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102-userdata-shm.mount: Deactivated successfully.
Dec 10 20:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc3fc5e82d551ae3a43d490a5d3025ef36a223c071f0909dfb60c0b008a606f-merged.mount: Deactivated successfully.
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.909 189283 INFO nova.virt.libvirt.driver [-] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Instance destroyed successfully.
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.910 189283 DEBUG nova.objects.instance [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lazy-loading 'resources' on Instance uuid 12986b74-7b15-4ff4-9019-081950660d4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:09:17 compute-0 podman[247591]: 2025-12-10 20:09:17.911989909 +0000 UTC m=+0.116056676 container cleanup 429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 20:09:17 compute-0 systemd[1]: libpod-conmon-429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102.scope: Deactivated successfully.
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.923 189283 DEBUG nova.virt.libvirt.vif [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T19:53:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T19:53:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe518ea62a94467e823b2b1046c57a2e',ramdisk_id='',reservation_id='r-0icu4z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='06e6231d-0a77-4b09-acb3-e7faf5a777be',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T19:53:14Z,user_data=None,user_id='2143e69e49fd49db99c8737c973c1ea5',uuid=12986b74-7b15-4ff4-9019-081950660d4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.924 189283 DEBUG nova.network.os_vif_util [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converting VIF {"id": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "address": "fa:16:3e:96:2e:35", "network": {"id": "e55a1ff5-f742-4bad-ae9c-2f6d4795fa29", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.139", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe518ea62a94467e823b2b1046c57a2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b76af1-42", "ovs_interfaceid": "20b76af1-42c6-4b7d-a834-c20e017b3e8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.924 189283 DEBUG nova.network.os_vif_util [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:2e:35,bridge_name='br-int',has_traffic_filtering=True,id=20b76af1-42c6-4b7d-a834-c20e017b3e8d,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b76af1-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.925 189283 DEBUG os_vif [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:2e:35,bridge_name='br-int',has_traffic_filtering=True,id=20b76af1-42c6-4b7d-a834-c20e017b3e8d,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b76af1-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.926 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.927 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20b76af1-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.931 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.934 189283 INFO os_vif [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:2e:35,bridge_name='br-int',has_traffic_filtering=True,id=20b76af1-42c6-4b7d-a834-c20e017b3e8d,network=Network(e55a1ff5-f742-4bad-ae9c-2f6d4795fa29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b76af1-42')
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.935 189283 INFO nova.virt.libvirt.driver [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Deleting instance files /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b_del
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.936 189283 INFO nova.virt.libvirt.driver [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Deletion of /var/lib/nova/instances/12986b74-7b15-4ff4-9019-081950660d4b_del complete
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.989 189283 INFO nova.compute.manager [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Took 0.37 seconds to destroy the instance on the hypervisor.
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.990 189283 DEBUG oslo.service.loopingcall [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.990 189283 DEBUG nova.compute.manager [-] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:09:17 compute-0 nova_compute[189279]: 2025-12-10 20:09:17.991 189283 DEBUG nova.network.neutron [-] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:09:18 compute-0 podman[247640]: 2025-12-10 20:09:18.014668844 +0000 UTC m=+0.072920414 container remove 429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.023 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5c1972-47d6-4475-9d47-075244678d4a]: (4, ('Wed Dec 10 08:09:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 (429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102)\n429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102\nWed Dec 10 08:09:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 (429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102)\n429887737e95730050e6273086338cd64918584fc0bd030f9cfd323ec88f0102\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.025 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[02068855-fad9-40b4-966e-6e029a30884c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.027 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape55a1ff5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.029 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:18 compute-0 kernel: tape55a1ff5-f0: left promiscuous mode
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.036 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[cc68da3e-8e50-404d-b34f-e26f576860d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.045 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.059 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[49e26b88-3e76-4e04-ac02-30ac8b935018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.061 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a475ec25-a286-42b2-9b1a-8a513b36df63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.077 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ecb4b0-e1f4-4055-924c-d6c86c45aa48]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372618, 'reachable_time': 33968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247654, 'error': None, 'target': 'ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:18 compute-0 systemd[1]: run-netns-ovnmeta\x2de55a1ff5\x2df742\x2d4bad\x2dae9c\x2d2f6d4795fa29.mount: Deactivated successfully.
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.098 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e55a1ff5-f742-4bad-ae9c-2f6d4795fa29 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:09:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:18.099 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[a41472f0-4ca0-484d-aceb-bde9dfe482a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.708 189283 DEBUG nova.compute.manager [req-399a396b-d431-459e-97d0-b68125332897 req-bac31107-985e-4101-93ec-9fe93bf540cd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-vif-unplugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.709 189283 DEBUG oslo_concurrency.lockutils [req-399a396b-d431-459e-97d0-b68125332897 req-bac31107-985e-4101-93ec-9fe93bf540cd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.709 189283 DEBUG oslo_concurrency.lockutils [req-399a396b-d431-459e-97d0-b68125332897 req-bac31107-985e-4101-93ec-9fe93bf540cd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.709 189283 DEBUG oslo_concurrency.lockutils [req-399a396b-d431-459e-97d0-b68125332897 req-bac31107-985e-4101-93ec-9fe93bf540cd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.710 189283 DEBUG nova.compute.manager [req-399a396b-d431-459e-97d0-b68125332897 req-bac31107-985e-4101-93ec-9fe93bf540cd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] No waiting events found dispatching network-vif-unplugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:09:18 compute-0 nova_compute[189279]: 2025-12-10 20:09:18.710 189283 DEBUG nova.compute.manager [req-399a396b-d431-459e-97d0-b68125332897 req-bac31107-985e-4101-93ec-9fe93bf540cd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-vif-unplugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.247 189283 DEBUG nova.network.neutron [-] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.272 189283 INFO nova.compute.manager [-] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Took 2.28 seconds to deallocate network for instance.
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.311 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.311 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.343 189283 DEBUG nova.scheduler.client.report [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.362 189283 DEBUG nova.scheduler.client.report [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.362 189283 DEBUG nova.compute.provider_tree [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.380 189283 DEBUG nova.scheduler.client.report [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.401 189283 DEBUG nova.scheduler.client.report [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.464 189283 DEBUG nova.compute.provider_tree [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.484 189283 DEBUG nova.scheduler.client.report [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.505 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.545 189283 INFO nova.scheduler.client.report [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Deleted allocations for instance 12986b74-7b15-4ff4-9019-081950660d4b
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.627 189283 DEBUG oslo_concurrency.lockutils [None req-c2ab7b7a-5a03-458a-9350-323d1ed8123b 2143e69e49fd49db99c8737c973c1ea5 fe518ea62a94467e823b2b1046c57a2e - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.788 189283 DEBUG nova.compute.manager [req-2bc361da-d2b0-4e54-80ad-edea974a9cc3 req-b188ba9c-85af-4d1a-92c3-7958f51157bf 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.788 189283 DEBUG oslo_concurrency.lockutils [req-2bc361da-d2b0-4e54-80ad-edea974a9cc3 req-b188ba9c-85af-4d1a-92c3-7958f51157bf 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "12986b74-7b15-4ff4-9019-081950660d4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.789 189283 DEBUG oslo_concurrency.lockutils [req-2bc361da-d2b0-4e54-80ad-edea974a9cc3 req-b188ba9c-85af-4d1a-92c3-7958f51157bf 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.789 189283 DEBUG oslo_concurrency.lockutils [req-2bc361da-d2b0-4e54-80ad-edea974a9cc3 req-b188ba9c-85af-4d1a-92c3-7958f51157bf 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "12986b74-7b15-4ff4-9019-081950660d4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.789 189283 DEBUG nova.compute.manager [req-2bc361da-d2b0-4e54-80ad-edea974a9cc3 req-b188ba9c-85af-4d1a-92c3-7958f51157bf 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] No waiting events found dispatching network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.789 189283 WARNING nova.compute.manager [req-2bc361da-d2b0-4e54-80ad-edea974a9cc3 req-b188ba9c-85af-4d1a-92c3-7958f51157bf 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received unexpected event network-vif-plugged-20b76af1-42c6-4b7d-a834-c20e017b3e8d for instance with vm_state deleted and task_state None.
Dec 10 20:09:20 compute-0 nova_compute[189279]: 2025-12-10 20:09:20.790 189283 DEBUG nova.compute.manager [req-2bc361da-d2b0-4e54-80ad-edea974a9cc3 req-b188ba9c-85af-4d1a-92c3-7958f51157bf 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Received event network-vif-deleted-20b76af1-42c6-4b7d-a834-c20e017b3e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:09:21 compute-0 nova_compute[189279]: 2025-12-10 20:09:21.828 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:22 compute-0 podman[247657]: 2025-12-10 20:09:22.127503074 +0000 UTC m=+0.101007601 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 20:09:22 compute-0 nova_compute[189279]: 2025-12-10 20:09:22.931 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:23.389 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:23.389 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:09:23.389 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:26 compute-0 nova_compute[189279]: 2025-12-10 20:09:26.832 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:27 compute-0 nova_compute[189279]: 2025-12-10 20:09:27.936 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:29 compute-0 podman[203484]: time="2025-12-10T20:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:09:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:09:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Dec 10 20:09:30 compute-0 podman[247677]: 2025-12-10 20:09:30.0938085 +0000 UTC m=+0.075913826 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Dec 10 20:09:30 compute-0 podman[247676]: 2025-12-10 20:09:30.137539117 +0000 UTC m=+0.118588104 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:09:31 compute-0 openstack_network_exporter[205632]: ERROR   20:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:09:31 compute-0 openstack_network_exporter[205632]: ERROR   20:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:09:31 compute-0 openstack_network_exporter[205632]: ERROR   20:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:09:31 compute-0 openstack_network_exporter[205632]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:09:31 compute-0 openstack_network_exporter[205632]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:09:31 compute-0 nova_compute[189279]: 2025-12-10 20:09:31.833 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:32 compute-0 nova_compute[189279]: 2025-12-10 20:09:32.908 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397357.9059107, 12986b74-7b15-4ff4-9019-081950660d4b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:09:32 compute-0 nova_compute[189279]: 2025-12-10 20:09:32.908 189283 INFO nova.compute.manager [-] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] VM Stopped (Lifecycle Event)
Dec 10 20:09:32 compute-0 nova_compute[189279]: 2025-12-10 20:09:32.929 189283 DEBUG nova.compute.manager [None req-6df6c78f-bfea-4fa6-a66e-7416241b0da6 - - - - - -] [instance: 12986b74-7b15-4ff4-9019-081950660d4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:09:32 compute-0 nova_compute[189279]: 2025-12-10 20:09:32.940 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:36 compute-0 nova_compute[189279]: 2025-12-10 20:09:36.835 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:37 compute-0 nova_compute[189279]: 2025-12-10 20:09:37.944 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:39 compute-0 podman[247719]: 2025-12-10 20:09:39.084122637 +0000 UTC m=+0.064412785 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 10 20:09:39 compute-0 podman[247721]: 2025-12-10 20:09:39.118916593 +0000 UTC m=+0.076711565 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, version=9.4, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0)
Dec 10 20:09:39 compute-0 podman[247720]: 2025-12-10 20:09:39.119749796 +0000 UTC m=+0.096965011 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:09:41 compute-0 nova_compute[189279]: 2025-12-10 20:09:41.837 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.179 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.179 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.180 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.188 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.192 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.193 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.193 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.194 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.194 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.195 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.195 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.197 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.198 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.198 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.199 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.200 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.200 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.201 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.201 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.202 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.202 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.203 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.203 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.204 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.204 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.205 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.205 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:09:42.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:09:42 compute-0 nova_compute[189279]: 2025-12-10 20:09:42.948 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:44 compute-0 podman[247775]: 2025-12-10 20:09:44.758083941 +0000 UTC m=+0.072402971 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:09:44 compute-0 podman[247774]: 2025-12-10 20:09:44.767743151 +0000 UTC m=+0.088415911 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 20:09:46 compute-0 nova_compute[189279]: 2025-12-10 20:09:46.842 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:47 compute-0 podman[247815]: 2025-12-10 20:09:47.175527152 +0000 UTC m=+0.148258504 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 10 20:09:47 compute-0 nova_compute[189279]: 2025-12-10 20:09:47.951 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:50 compute-0 ovn_controller[97701]: 2025-12-10T20:09:50Z|00080|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 10 20:09:51 compute-0 nova_compute[189279]: 2025-12-10 20:09:51.843 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:52 compute-0 nova_compute[189279]: 2025-12-10 20:09:52.956 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:53 compute-0 podman[247842]: 2025-12-10 20:09:53.148418575 +0000 UTC m=+0.113275901 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 10 20:09:54 compute-0 nova_compute[189279]: 2025-12-10 20:09:54.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:56 compute-0 nova_compute[189279]: 2025-12-10 20:09:56.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:56 compute-0 nova_compute[189279]: 2025-12-10 20:09:56.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:56 compute-0 nova_compute[189279]: 2025-12-10 20:09:56.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 20:09:56 compute-0 nova_compute[189279]: 2025-12-10 20:09:56.506 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 20:09:56 compute-0 nova_compute[189279]: 2025-12-10 20:09:56.847 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:57 compute-0 nova_compute[189279]: 2025-12-10 20:09:57.961 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.507 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.508 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.508 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.509 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.539 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.539 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.539 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:58 compute-0 nova_compute[189279]: 2025-12-10 20:09:58.540 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.007 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.008 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5381MB free_disk=72.36875915527344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.008 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.009 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.173 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.173 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.260 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.274 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.383 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:09:59 compute-0 nova_compute[189279]: 2025-12-10 20:09:59.384 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:09:59 compute-0 podman[203484]: time="2025-12-10T20:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:09:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:09:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec 10 20:10:00 compute-0 nova_compute[189279]: 2025-12-10 20:10:00.364 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:00 compute-0 nova_compute[189279]: 2025-12-10 20:10:00.365 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:10:00 compute-0 nova_compute[189279]: 2025-12-10 20:10:00.365 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:10:00 compute-0 nova_compute[189279]: 2025-12-10 20:10:00.384 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 20:10:00 compute-0 nova_compute[189279]: 2025-12-10 20:10:00.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:01 compute-0 podman[247862]: 2025-12-10 20:10:01.162392175 +0000 UTC m=+0.132698324 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:10:01 compute-0 podman[247863]: 2025-12-10 20:10:01.167802771 +0000 UTC m=+0.122487088 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc.)
Dec 10 20:10:01 compute-0 openstack_network_exporter[205632]: ERROR   20:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:10:01 compute-0 openstack_network_exporter[205632]: ERROR   20:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:10:01 compute-0 openstack_network_exporter[205632]: ERROR   20:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:10:01 compute-0 openstack_network_exporter[205632]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:10:01 compute-0 openstack_network_exporter[205632]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:10:01 compute-0 nova_compute[189279]: 2025-12-10 20:10:01.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:01 compute-0 nova_compute[189279]: 2025-12-10 20:10:01.848 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:02 compute-0 nova_compute[189279]: 2025-12-10 20:10:02.965 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:04 compute-0 nova_compute[189279]: 2025-12-10 20:10:04.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:06 compute-0 nova_compute[189279]: 2025-12-10 20:10:06.852 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:07 compute-0 nova_compute[189279]: 2025-12-10 20:10:07.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:07 compute-0 nova_compute[189279]: 2025-12-10 20:10:07.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 20:10:07 compute-0 nova_compute[189279]: 2025-12-10 20:10:07.969 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:08 compute-0 nova_compute[189279]: 2025-12-10 20:10:08.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:10 compute-0 podman[247907]: 2025-12-10 20:10:10.13097676 +0000 UTC m=+0.094714082 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 20:10:10 compute-0 podman[247908]: 2025-12-10 20:10:10.141810941 +0000 UTC m=+0.102895021 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Dec 10 20:10:10 compute-0 podman[247906]: 2025-12-10 20:10:10.161925073 +0000 UTC m=+0.118505782 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:10:11 compute-0 nova_compute[189279]: 2025-12-10 20:10:11.854 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:12 compute-0 nova_compute[189279]: 2025-12-10 20:10:12.973 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:15 compute-0 podman[247962]: 2025-12-10 20:10:15.089817688 +0000 UTC m=+0.069917284 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:10:15 compute-0 podman[247961]: 2025-12-10 20:10:15.094279389 +0000 UTC m=+0.074424116 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:10:16 compute-0 nova_compute[189279]: 2025-12-10 20:10:16.856 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:17 compute-0 nova_compute[189279]: 2025-12-10 20:10:17.975 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:18 compute-0 podman[248003]: 2025-12-10 20:10:18.148125225 +0000 UTC m=+0.121717608 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:10:21 compute-0 sshd-session[248030]: Invalid user solv from 80.94.92.184 port 57724
Dec 10 20:10:21 compute-0 sshd-session[248030]: Connection closed by invalid user solv 80.94.92.184 port 57724 [preauth]
Dec 10 20:10:21 compute-0 nova_compute[189279]: 2025-12-10 20:10:21.859 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:22 compute-0 nova_compute[189279]: 2025-12-10 20:10:22.978 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:10:23.391 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:10:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:10:23.391 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:10:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:10:23.392 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:10:24 compute-0 podman[248033]: 2025-12-10 20:10:24.099788536 +0000 UTC m=+0.083772676 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:10:26 compute-0 nova_compute[189279]: 2025-12-10 20:10:26.861 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:27 compute-0 nova_compute[189279]: 2025-12-10 20:10:27.982 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:29 compute-0 podman[203484]: time="2025-12-10T20:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:10:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:10:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Dec 10 20:10:31 compute-0 openstack_network_exporter[205632]: ERROR   20:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:10:31 compute-0 openstack_network_exporter[205632]: ERROR   20:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:10:31 compute-0 openstack_network_exporter[205632]: ERROR   20:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:10:31 compute-0 openstack_network_exporter[205632]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:10:31 compute-0 openstack_network_exporter[205632]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:10:31 compute-0 nova_compute[189279]: 2025-12-10 20:10:31.863 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:32 compute-0 podman[248053]: 2025-12-10 20:10:32.129842989 +0000 UTC m=+0.116011695 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:10:32 compute-0 podman[248054]: 2025-12-10 20:10:32.158361806 +0000 UTC m=+0.127151374 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:10:32 compute-0 nova_compute[189279]: 2025-12-10 20:10:32.986 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:36 compute-0 nova_compute[189279]: 2025-12-10 20:10:36.866 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:37 compute-0 nova_compute[189279]: 2025-12-10 20:10:37.990 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:41 compute-0 podman[248097]: 2025-12-10 20:10:41.087719034 +0000 UTC m=+0.068091444 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 10 20:10:41 compute-0 podman[248098]: 2025-12-10 20:10:41.116807687 +0000 UTC m=+0.086301624 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:10:41 compute-0 podman[248104]: 2025-12-10 20:10:41.137138695 +0000 UTC m=+0.088727780 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, container_name=kepler, vcs-type=git, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 10 20:10:41 compute-0 nova_compute[189279]: 2025-12-10 20:10:41.870 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:42 compute-0 nova_compute[189279]: 2025-12-10 20:10:42.992 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:46 compute-0 podman[248154]: 2025-12-10 20:10:46.124954724 +0000 UTC m=+0.100816536 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:10:46 compute-0 podman[248153]: 2025-12-10 20:10:46.129108536 +0000 UTC m=+0.105772860 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 10 20:10:46 compute-0 nova_compute[189279]: 2025-12-10 20:10:46.872 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:47 compute-0 nova_compute[189279]: 2025-12-10 20:10:47.995 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:49 compute-0 podman[248195]: 2025-12-10 20:10:49.243317137 +0000 UTC m=+0.210123309 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 20:10:51 compute-0 nova_compute[189279]: 2025-12-10 20:10:51.874 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:53 compute-0 nova_compute[189279]: 2025-12-10 20:10:52.999 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:55 compute-0 podman[248222]: 2025-12-10 20:10:55.148400184 +0000 UTC m=+0.116137688 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 20:10:55 compute-0 nova_compute[189279]: 2025-12-10 20:10:55.503 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:56 compute-0 nova_compute[189279]: 2025-12-10 20:10:56.879 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:58 compute-0 nova_compute[189279]: 2025-12-10 20:10:58.006 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:10:58 compute-0 nova_compute[189279]: 2025-12-10 20:10:58.484 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:58 compute-0 nova_compute[189279]: 2025-12-10 20:10:58.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:59 compute-0 nova_compute[189279]: 2025-12-10 20:10:59.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:10:59 compute-0 nova_compute[189279]: 2025-12-10 20:10:59.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:10:59 compute-0 nova_compute[189279]: 2025-12-10 20:10:59.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:10:59 compute-0 nova_compute[189279]: 2025-12-10 20:10:59.501 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 20:10:59 compute-0 podman[203484]: time="2025-12-10T20:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:10:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:10:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
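[editor's note] The two GET requests above are a Go client (most likely the podman_exporter configured earlier in this log, whose environment sets CONTAINER_HOST=unix:///run/podman/podman.sock) polling the libpod REST API over the podman service socket. Purely as an illustration, not part of the deployment, the same endpoint can be queried from Python over that Unix socket; the socket path and API prefix below are copied from the log lines above.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects to a Unix domain socket."""

    def __init__(self, path):
        super().__init__("localhost")  # host value is only used for headers
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

# Socket path and API version as seen in the requests logged above.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
containers = json.loads(resp.read())
for c in containers:
    print(c.get("Names"), c.get("State"))
```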
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.516 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.516 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.517 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
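[editor's note] The "Acquiring lock" / "acquired" / "released" triplets above (with waited/held timings) are the standard DEBUG output that oslo.concurrency emits around code guarded by its lock helpers (the logged file is oslo_concurrency/lockutils.py). A minimal sketch of the same mechanism, not nova's actual resource-tracker code, which wraps this in its own helpers:

```python
from oslo_concurrency import lockutils

# Callers of this function are serialized on the named in-process lock; with
# DEBUG logging enabled, oslo.concurrency prints the same "Acquiring lock",
# "acquired" and "released" messages seen in the journal above.
@lockutils.synchronized("compute_resources")
def update_resources():
    # ... work done while holding the lock ...
    pass

update_resources()
```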
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.517 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.835 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.836 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5377MB free_disk=72.36875915527344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.836 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.836 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.914 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:11:00 compute-0 nova_compute[189279]: 2025-12-10 20:11:00.914 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:11:01 compute-0 nova_compute[189279]: 2025-12-10 20:11:01.002 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:11:01 compute-0 nova_compute[189279]: 2025-12-10 20:11:01.027 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
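[editor's note] For context on the inventory record above: in the Placement data model, the schedulable amount of a resource class is (total - reserved) * allocation_ratio. A small worked check against the values logged above (generic Placement arithmetic, not code from nova itself):

```python
# Inventory exactly as reported in the log line above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    # Placement capacity formula: (total - reserved) * allocation_ratio
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")

# Prints: VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```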
Dec 10 20:11:01 compute-0 nova_compute[189279]: 2025-12-10 20:11:01.031 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:11:01 compute-0 nova_compute[189279]: 2025-12-10 20:11:01.031 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:11:01 compute-0 openstack_network_exporter[205632]: ERROR   20:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:11:01 compute-0 openstack_network_exporter[205632]: ERROR   20:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:11:01 compute-0 openstack_network_exporter[205632]: ERROR   20:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:11:01 compute-0 openstack_network_exporter[205632]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:11:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:11:01 compute-0 openstack_network_exporter[205632]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:11:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:11:01 compute-0 nova_compute[189279]: 2025-12-10 20:11:01.882 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:03 compute-0 nova_compute[189279]: 2025-12-10 20:11:03.009 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:03 compute-0 podman[248241]: 2025-12-10 20:11:03.133661441 +0000 UTC m=+0.114350271 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
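[editor's note] The node_exporter invocation above restricts its systemd collector with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. As an illustration only of what that pattern selects (node_exporter applies it in Go against the full unit name, which behaves roughly like a full match), the filter can be checked with Python's re; the unit names below are hypothetical examples, not taken from this host:

```python
import re

# Pattern copied from the --collector.systemd.unit-include flag above.
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

units = [
    "edpm_nova_compute.service",  # hypothetical unit names, for illustration only
    "ovs-vswitchd.service",
    "virtqemud.service",
    "rsyslog.service",
    "sshd.service",
]

for unit in units:
    kept = unit_include.fullmatch(unit) is not None
    print(f"{unit}: {'collected' if kept else 'skipped'}")
```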
Dec 10 20:11:03 compute-0 podman[248242]: 2025-12-10 20:11:03.152233681 +0000 UTC m=+0.115881682 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350)
Dec 10 20:11:04 compute-0 nova_compute[189279]: 2025-12-10 20:11:04.033 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:05 compute-0 nova_compute[189279]: 2025-12-10 20:11:05.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:06 compute-0 nova_compute[189279]: 2025-12-10 20:11:06.885 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:08 compute-0 nova_compute[189279]: 2025-12-10 20:11:08.013 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:09 compute-0 nova_compute[189279]: 2025-12-10 20:11:09.484 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:11 compute-0 nova_compute[189279]: 2025-12-10 20:11:11.889 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:12 compute-0 podman[248284]: 2025-12-10 20:11:12.14925542 +0000 UTC m=+0.122710856 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 10 20:11:12 compute-0 podman[248286]: 2025-12-10 20:11:12.150430681 +0000 UTC m=+0.113770235 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:11:12 compute-0 podman[248285]: 2025-12-10 20:11:12.157599714 +0000 UTC m=+0.128746157 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec 10 20:11:13 compute-0 nova_compute[189279]: 2025-12-10 20:11:13.017 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:16 compute-0 nova_compute[189279]: 2025-12-10 20:11:16.894 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:17 compute-0 podman[248336]: 2025-12-10 20:11:17.117986086 +0000 UTC m=+0.092317507 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:11:17 compute-0 podman[248335]: 2025-12-10 20:11:17.135894058 +0000 UTC m=+0.108445181 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 10 20:11:18 compute-0 nova_compute[189279]: 2025-12-10 20:11:18.022 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:20 compute-0 podman[248378]: 2025-12-10 20:11:20.153329383 +0000 UTC m=+0.135847678 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 10 20:11:21 compute-0 nova_compute[189279]: 2025-12-10 20:11:21.895 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:23 compute-0 nova_compute[189279]: 2025-12-10 20:11:23.027 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:11:23.392 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:11:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:11:23.393 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:11:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:11:23.394 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:11:26 compute-0 podman[248403]: 2025-12-10 20:11:26.177190029 +0000 UTC m=+0.146406433 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 20:11:26 compute-0 nova_compute[189279]: 2025-12-10 20:11:26.898 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:28 compute-0 nova_compute[189279]: 2025-12-10 20:11:28.031 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:29 compute-0 podman[203484]: time="2025-12-10T20:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:11:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:11:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Dec 10 20:11:31 compute-0 openstack_network_exporter[205632]: ERROR   20:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:11:31 compute-0 openstack_network_exporter[205632]: ERROR   20:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:11:31 compute-0 openstack_network_exporter[205632]: ERROR   20:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:11:31 compute-0 openstack_network_exporter[205632]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:11:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:11:31 compute-0 openstack_network_exporter[205632]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:11:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:11:31 compute-0 nova_compute[189279]: 2025-12-10 20:11:31.900 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:33 compute-0 nova_compute[189279]: 2025-12-10 20:11:33.037 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:34 compute-0 podman[248422]: 2025-12-10 20:11:34.120832395 +0000 UTC m=+0.092132422 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:11:34 compute-0 podman[248423]: 2025-12-10 20:11:34.13327551 +0000 UTC m=+0.093727675 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Dec 10 20:11:36 compute-0 nova_compute[189279]: 2025-12-10 20:11:36.903 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:38 compute-0 nova_compute[189279]: 2025-12-10 20:11:38.041 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:41 compute-0 nova_compute[189279]: 2025-12-10 20:11:41.906 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.180 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.181 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.181 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.186 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.188 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.188 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.189 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.190 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.191 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.191 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.192 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.195 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.197 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.197 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.197 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.198 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.198 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.199 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.199 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.199 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.200 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.200 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.200 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.201 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.201 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.202 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.202 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:11:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:11:42.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
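
The ceilometer_agent_compute lines above trace one complete polling cycle on an empty hypervisor: each pollster is registered against a shared ThreadPoolExecutor, the local_instances discovery is executed and its result cached for the cycle, and because it returns no instances every pollster is skipped and then marked finished. Below is a minimal sketch of that control flow under those assumptions; the function and variable names are illustrative, not ceilometer's actual implementation.

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # This compute node reports no instances, so discovery yields nothing.
        return []

    def run_polling_cycle(pollsters):
        # Discovery results are cached once per cycle and shared by all pollsters.
        discovery_cache = {'local_instances': discover_local_instances()}
        history = {}

        def run_one(name):
            resources = discovery_cache['local_instances']
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
                history[name] = []
                return
            history[name] = [f"sample for {r}" for r in resources]

        with ThreadPoolExecutor(max_workers=4) as executor:
            list(executor.map(run_one, pollsters))
        return history

    # Example: run_polling_cycle(['disk.root.size', 'memory.usage', 'cpu'])
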
Dec 10 20:11:43 compute-0 nova_compute[189279]: 2025-12-10 20:11:43.044 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:43 compute-0 podman[248469]: 2025-12-10 20:11:43.14702197 +0000 UTC m=+0.101242187 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:11:43 compute-0 podman[248468]: 2025-12-10 20:11:43.154509651 +0000 UTC m=+0.114719830 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:11:43 compute-0 podman[248470]: 2025-12-10 20:11:43.165151848 +0000 UTC m=+0.108283387 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:11:46 compute-0 nova_compute[189279]: 2025-12-10 20:11:46.908 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:48 compute-0 nova_compute[189279]: 2025-12-10 20:11:48.050 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:48 compute-0 podman[248522]: 2025-12-10 20:11:48.121343056 +0000 UTC m=+0.085857013 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:11:48 compute-0 podman[248521]: 2025-12-10 20:11:48.126238637 +0000 UTC m=+0.104043272 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 10 20:11:51 compute-0 podman[248564]: 2025-12-10 20:11:51.16952743 +0000 UTC m=+0.148146920 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
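
Each podman health_status event above carries the edpm-managed config_data the container was created from (image, network mode, volumes, and the healthcheck test plus its mount). As a rough illustration of how such a dictionary maps onto a container invocation, here is a hypothetical helper that builds a podman-run style argument list from the keys visible in these log lines; the real edpm_ansible role drives podman differently and handles many more options.

    def podman_run_args(name, cfg):
        # Hypothetical translation of an edpm config_data dict into podman run
        # arguments; only keys that appear in the log lines above are handled.
        args = ["podman", "run", "--detach", "--name", name]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        if cfg.get("restart"):
            args += ["--restart", cfg["restart"]]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        healthcheck = cfg.get("healthcheck", {})
        if healthcheck.get("test"):
            args += ["--health-cmd", healthcheck["test"]]
        args.append(cfg["image"])
        command = cfg.get("command")
        if isinstance(command, list):
            args += command
        elif command:
            args.append(command)
        return args

    # Example: podman_run_args("podman_exporter", config_data) using the dict
    # shown in the podman_exporter health_status line above.
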
Dec 10 20:11:51 compute-0 nova_compute[189279]: 2025-12-10 20:11:51.910 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:53 compute-0 nova_compute[189279]: 2025-12-10 20:11:53.054 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:55 compute-0 nova_compute[189279]: 2025-12-10 20:11:55.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:11:56.294 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:11:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:11:56.296 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:11:56 compute-0 nova_compute[189279]: 2025-12-10 20:11:56.298 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:56 compute-0 nova_compute[189279]: 2025-12-10 20:11:56.914 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:57 compute-0 podman[248588]: 2025-12-10 20:11:57.145267969 +0000 UTC m=+0.105600316 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.schema-version=1.0)
Dec 10 20:11:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:11:57.298 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
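
The ovn_metadata_agent transaction above writes the acknowledged nb_cfg value back into the Chassis_Private record's external_ids via ovsdbapp's DbSetCommand. A minimal sketch of issuing an equivalent command, assuming an already-connected ovsdbapp southbound API object (called sb_idl here); the helper name is invented for illustration.

    def ack_metadata_sb_cfg(sb_idl, chassis_private_uuid, nb_cfg):
        # Equivalent of the DbSetCommand in the transaction logged above,
        # assuming sb_idl exposes the standard ovsdbapp transaction()/db_set() API.
        with sb_idl.transaction(check_error=True) as txn:
            txn.add(sb_idl.db_set(
                'Chassis_Private', chassis_private_uuid,
                ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)})))
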
Dec 10 20:11:58 compute-0 nova_compute[189279]: 2025-12-10 20:11:58.056 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:11:58 compute-0 nova_compute[189279]: 2025-12-10 20:11:58.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:59 compute-0 nova_compute[189279]: 2025-12-10 20:11:59.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:59 compute-0 nova_compute[189279]: 2025-12-10 20:11:59.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:11:59 compute-0 nova_compute[189279]: 2025-12-10 20:11:59.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:11:59 compute-0 nova_compute[189279]: 2025-12-10 20:11:59.507 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 20:11:59 compute-0 nova_compute[189279]: 2025-12-10 20:11:59.507 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:11:59 compute-0 podman[203484]: time="2025-12-10T20:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:11:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:11:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Dec 10 20:12:00 compute-0 nova_compute[189279]: 2025-12-10 20:12:00.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:00 compute-0 nova_compute[189279]: 2025-12-10 20:12:00.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:12:01 compute-0 openstack_network_exporter[205632]: ERROR   20:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:12:01 compute-0 openstack_network_exporter[205632]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:12:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:12:01 compute-0 openstack_network_exporter[205632]: ERROR   20:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:12:01 compute-0 openstack_network_exporter[205632]: ERROR   20:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:12:01 compute-0 openstack_network_exporter[205632]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:12:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:12:01 compute-0 nova_compute[189279]: 2025-12-10 20:12:01.916 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.511 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.512 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.513 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.513 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.875 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.878 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5377MB free_disk=72.36875915527344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.879 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.880 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.957 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.959 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:12:02 compute-0 nova_compute[189279]: 2025-12-10 20:12:02.991 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:12:03 compute-0 nova_compute[189279]: 2025-12-10 20:12:03.013 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:12:03 compute-0 nova_compute[189279]: 2025-12-10 20:12:03.015 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:12:03 compute-0 nova_compute[189279]: 2025-12-10 20:12:03.016 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:03 compute-0 nova_compute[189279]: 2025-12-10 20:12:03.059 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:04 compute-0 nova_compute[189279]: 2025-12-10 20:12:04.016 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:05 compute-0 podman[248606]: 2025-12-10 20:12:05.930936208 +0000 UTC m=+0.068155742 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:12:05 compute-0 podman[248607]: 2025-12-10 20:12:05.952919593 +0000 UTC m=+0.086627033 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=)
Dec 10 20:12:06 compute-0 nova_compute[189279]: 2025-12-10 20:12:06.918 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:07 compute-0 nova_compute[189279]: 2025-12-10 20:12:07.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:08 compute-0 nova_compute[189279]: 2025-12-10 20:12:08.062 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:11 compute-0 nova_compute[189279]: 2025-12-10 20:12:11.921 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:13 compute-0 nova_compute[189279]: 2025-12-10 20:12:13.065 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:14 compute-0 podman[248651]: 2025-12-10 20:12:14.119106578 +0000 UTC m=+0.099729037 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:12:14 compute-0 podman[248652]: 2025-12-10 20:12:14.13697214 +0000 UTC m=+0.099505400 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 10 20:12:14 compute-0 podman[248653]: 2025-12-10 20:12:14.162945123 +0000 UTC m=+0.135835833 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, vcs-type=git)
Dec 10 20:12:16 compute-0 nova_compute[189279]: 2025-12-10 20:12:16.923 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:18 compute-0 nova_compute[189279]: 2025-12-10 20:12:18.067 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:19 compute-0 podman[248706]: 2025-12-10 20:12:19.109402921 +0000 UTC m=+0.091038862 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 10 20:12:19 compute-0 podman[248707]: 2025-12-10 20:12:19.128783084 +0000 UTC m=+0.097264529 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:12:21 compute-0 nova_compute[189279]: 2025-12-10 20:12:21.926 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:22 compute-0 podman[248748]: 2025-12-10 20:12:22.199841283 +0000 UTC m=+0.169009229 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 10 20:12:23 compute-0 nova_compute[189279]: 2025-12-10 20:12:23.070 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:23.394 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:23.394 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:23.395 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:26 compute-0 nova_compute[189279]: 2025-12-10 20:12:26.928 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:28 compute-0 nova_compute[189279]: 2025-12-10 20:12:28.073 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:28 compute-0 podman[248774]: 2025-12-10 20:12:28.129506248 +0000 UTC m=+0.100138308 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 20:12:28 compute-0 nova_compute[189279]: 2025-12-10 20:12:28.768 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:29 compute-0 podman[203484]: time="2025-12-10T20:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:12:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:12:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Dec 10 20:12:30 compute-0 nova_compute[189279]: 2025-12-10 20:12:30.151 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:31 compute-0 nova_compute[189279]: 2025-12-10 20:12:31.044 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:31 compute-0 openstack_network_exporter[205632]: ERROR   20:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:12:31 compute-0 openstack_network_exporter[205632]: ERROR   20:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:12:31 compute-0 openstack_network_exporter[205632]: ERROR   20:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:12:31 compute-0 openstack_network_exporter[205632]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:12:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:12:31 compute-0 openstack_network_exporter[205632]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:12:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:12:31 compute-0 nova_compute[189279]: 2025-12-10 20:12:31.593 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:31 compute-0 nova_compute[189279]: 2025-12-10 20:12:31.930 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:32 compute-0 nova_compute[189279]: 2025-12-10 20:12:32.389 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:33 compute-0 nova_compute[189279]: 2025-12-10 20:12:33.076 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:36 compute-0 podman[248793]: 2025-12-10 20:12:36.15781378 +0000 UTC m=+0.138445663 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:12:36 compute-0 podman[248794]: 2025-12-10 20:12:36.165929 +0000 UTC m=+0.128625808 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Dec 10 20:12:36 compute-0 nova_compute[189279]: 2025-12-10 20:12:36.933 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:38 compute-0 nova_compute[189279]: 2025-12-10 20:12:38.080 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:38 compute-0 nova_compute[189279]: 2025-12-10 20:12:38.387 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:38 compute-0 nova_compute[189279]: 2025-12-10 20:12:38.664 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:38 compute-0 nova_compute[189279]: 2025-12-10 20:12:38.993 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:40 compute-0 nova_compute[189279]: 2025-12-10 20:12:40.070 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:40 compute-0 nova_compute[189279]: 2025-12-10 20:12:40.214 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:41 compute-0 nova_compute[189279]: 2025-12-10 20:12:41.936 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:43 compute-0 nova_compute[189279]: 2025-12-10 20:12:43.085 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:44 compute-0 podman[248839]: 2025-12-10 20:12:44.800859295 +0000 UTC m=+0.084619688 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, com.redhat.component=ubi9-container)
Dec 10 20:12:44 compute-0 podman[248838]: 2025-12-10 20:12:44.815420669 +0000 UTC m=+0.104152106 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 10 20:12:44 compute-0 podman[248837]: 2025-12-10 20:12:44.823278191 +0000 UTC m=+0.114954188 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 20:12:46 compute-0 nova_compute[189279]: 2025-12-10 20:12:46.940 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:48 compute-0 nova_compute[189279]: 2025-12-10 20:12:48.089 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:48 compute-0 nova_compute[189279]: 2025-12-10 20:12:48.995 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:49 compute-0 nova_compute[189279]: 2025-12-10 20:12:49.946 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:49 compute-0 nova_compute[189279]: 2025-12-10 20:12:49.947 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:49 compute-0 nova_compute[189279]: 2025-12-10 20:12:49.975 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:12:49 compute-0 nova_compute[189279]: 2025-12-10 20:12:49.997 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:49 compute-0 nova_compute[189279]: 2025-12-10 20:12:49.998 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.037 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:12:50 compute-0 podman[248891]: 2025-12-10 20:12:50.116173824 +0000 UTC m=+0.086118269 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:12:50 compute-0 podman[248892]: 2025-12-10 20:12:50.128939738 +0000 UTC m=+0.103384585 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.175 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.176 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.185 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.186 189283 INFO nova.compute.claims [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.193 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.321 189283 DEBUG nova.compute.provider_tree [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.346 189283 DEBUG nova.scheduler.client.report [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.378 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.379 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.382 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.391 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.392 189283 INFO nova.compute.claims [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.458 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.459 189283 DEBUG nova.network.neutron [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.485 189283 INFO nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.501 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.562 189283 DEBUG nova.compute.provider_tree [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.602 189283 DEBUG nova.scheduler.client.report [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.625 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.626 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.627 189283 INFO nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Creating image(s)
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.627 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.628 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.629 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.629 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.630 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.637 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.638 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.695 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.695 189283 DEBUG nova.network.neutron [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.712 189283 INFO nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.728 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.843 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.846 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.847 189283 INFO nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Creating image(s)
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.848 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "/var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.849 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "/var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.851 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "/var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.852 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:50 compute-0 nova_compute[189279]: 2025-12-10 20:12:50.950 189283 DEBUG nova.policy [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0c9cd4059c654dd4947e252e9f3acf85', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2e63db29894648c7a06ef3bcb4b98768', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:12:51 compute-0 nova_compute[189279]: 2025-12-10 20:12:51.050 189283 DEBUG nova.policy [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb4b85bd92294252be8009eb039aa323', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '713d58cceef640c38aa99b2cb5aafd50', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:12:51 compute-0 nova_compute[189279]: 2025-12-10 20:12:51.964 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.312 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.384 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "81f60881-4334-4ede-a10d-454a7e8a4154" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.385 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.391 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.part --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.392 189283 DEBUG nova.virt.images [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] 33b11153-486b-4d32-bc63-6b6a6ed0b704 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.393 189283 DEBUG nova.privsep.utils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.394 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.part /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.422 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.523 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.524 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.534 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.534 189283 INFO nova.compute.claims [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.546 189283 DEBUG nova.network.neutron [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Successfully created port: fd5af3d6-f054-4886-9ca7-2888772def6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.633 189283 DEBUG nova.network.neutron [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Successfully created port: a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.685 189283 DEBUG nova.compute.provider_tree [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.732 189283 DEBUG nova.scheduler.client.report [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.852 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.853 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.911 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.part /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.converted" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.912 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.912 189283 DEBUG nova.network.neutron [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.917 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.948 189283 INFO nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:12:52 compute-0 nova_compute[189279]: 2025-12-10 20:12:52.965 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.019 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905.converted --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.020 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.033 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 2.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.033 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.045 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.063 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.096 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.098 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.098 189283 INFO nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Creating image(s)
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.099 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "/var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.100 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "/var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.101 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "/var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.112 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.114 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.115 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.135 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.136 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.147 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.169 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.171 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.179 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.180 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:53 compute-0 podman[248943]: 2025-12-10 20:12:53.187481479 +0000 UTC m=+0.169243026 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.215 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.216 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.261 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.263 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.263 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.290 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.302 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.352 189283 DEBUG nova.policy [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9901235a2b1b4cf4b7a0d6fd53dd0396', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2505343710a74a61bea5fcb849a4b61b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.355 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.356 189283 DEBUG nova.virt.disk.api [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Checking if we can resize image /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.356 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.383 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.384 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.418 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.420 189283 DEBUG nova.virt.disk.api [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Cannot resize image /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.420 189283 DEBUG nova.objects.instance [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'migration_context' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.435 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.436 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.436 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.458 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.470 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.491 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.492 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Ensure instance console log exists: /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.493 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.493 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.493 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.506 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.506 189283 DEBUG nova.virt.disk.api [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Checking if we can resize image /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.507 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.535 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.536 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.569 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.571 189283 DEBUG nova.virt.disk.api [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Cannot resize image /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.572 189283 DEBUG nova.objects.instance [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lazy-loading 'migration_context' on Instance uuid 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.586 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.586 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Ensure instance console log exists: /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.587 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.588 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.589 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.590 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.591 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.591 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.654 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.655 189283 DEBUG nova.virt.disk.api [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Checking if we can resize image /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.656 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.721 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.722 189283 DEBUG nova.virt.disk.api [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Cannot resize image /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.723 189283 DEBUG nova.objects.instance [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lazy-loading 'migration_context' on Instance uuid 81f60881-4334-4ede-a10d-454a7e8a4154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.740 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.740 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Ensure instance console log exists: /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.741 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.742 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:53 compute-0 nova_compute[189279]: 2025-12-10 20:12:53.742 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.017 189283 DEBUG nova.network.neutron [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Successfully updated port: fd5af3d6-f054-4886-9ca7-2888772def6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.037 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.037 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquired lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.038 189283 DEBUG nova.network.neutron [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.486 189283 DEBUG nova.network.neutron [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.929 189283 DEBUG nova.compute.manager [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-changed-fd5af3d6-f054-4886-9ca7-2888772def6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.930 189283 DEBUG nova.compute.manager [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Refreshing instance network info cache due to event network-changed-fd5af3d6-f054-4886-9ca7-2888772def6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:12:54 compute-0 nova_compute[189279]: 2025-12-10 20:12:54.931 189283 DEBUG oslo_concurrency.lockutils [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.044 189283 DEBUG nova.network.neutron [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Successfully updated port: a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.061 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.062 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquired lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.062 189283 DEBUG nova.network.neutron [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.346 189283 DEBUG nova.network.neutron [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Successfully created port: 42ea5f6d-dd00-4169-8385-3b8709530411 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.447 189283 DEBUG nova.network.neutron [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Updating instance_info_cache with network_info: [{"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.487 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Releasing lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.488 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Instance network_info: |[{"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.488 189283 DEBUG oslo_concurrency.lockutils [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.488 189283 DEBUG nova.network.neutron [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Refreshing network info cache for port fd5af3d6-f054-4886-9ca7-2888772def6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.491 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Start _get_guest_xml network_info=[{"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.496 189283 DEBUG nova.network.neutron [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.503 189283 WARNING nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.509 189283 DEBUG nova.virt.libvirt.host [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.510 189283 DEBUG nova.virt.libvirt.host [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.519 189283 DEBUG nova.virt.libvirt.host [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.519 189283 DEBUG nova.virt.libvirt.host [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.520 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.520 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.521 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.521 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.521 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.521 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.522 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.522 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.522 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.523 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.523 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.523 189283 DEBUG nova.virt.hardware [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.527 189283 DEBUG nova.virt.libvirt.vif [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-250151050',display_name='tempest-ServersTestManualDisk-server-250151050',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-250151050',id=8,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTnhRVtOkaEB43hj3b9xkLF/AS5xBqt91JJz2md3hTIC1ctHaB2qLQgFSk1Zu6ZyPqHY7WWH8JPI6LRwH7YTWJ/DZ4DmtLklE1lfKyxzq1OGuzJ+13jtKao+VNcvaCVzA==',key_name='tempest-keypair-1590776143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='713d58cceef640c38aa99b2cb5aafd50',ramdisk_id='',reservation_id='r-21nhx334',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-301505012',owner_user_name='tempest-ServersTestManualDisk-301505012-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:12:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb4b85bd92294252be8009eb039aa323',uuid=89dd49b4-ab03-4bc5-84ea-a2ae3b040e06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.527 189283 DEBUG nova.network.os_vif_util [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Converting VIF {"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.528 189283 DEBUG nova.network.os_vif_util [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:2f:3e,bridge_name='br-int',has_traffic_filtering=True,id=fd5af3d6-f054-4886-9ca7-2888772def6f,network=Network(da989677-bb1a-43bc-bbae-3ccb2693342f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd5af3d6-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.529 189283 DEBUG nova.objects.instance [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.553 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <uuid>89dd49b4-ab03-4bc5-84ea-a2ae3b040e06</uuid>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <name>instance-00000008</name>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <nova:name>tempest-ServersTestManualDisk-server-250151050</nova:name>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:12:55</nova:creationTime>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:user uuid="eb4b85bd92294252be8009eb039aa323">tempest-ServersTestManualDisk-301505012-project-member</nova:user>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:project uuid="713d58cceef640c38aa99b2cb5aafd50">tempest-ServersTestManualDisk-301505012</nova:project>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         <nova:port uuid="fd5af3d6-f054-4886-9ca7-2888772def6f">
Dec 10 20:12:55 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <system>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <entry name="serial">89dd49b4-ab03-4bc5-84ea-a2ae3b040e06</entry>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <entry name="uuid">89dd49b4-ab03-4bc5-84ea-a2ae3b040e06</entry>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </system>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <os>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   </os>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <features>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   </features>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk.config"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:ae:2f:3e"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <target dev="tapfd5af3d6-f0"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/console.log" append="off"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <video>
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </video>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:12:55 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:12:55 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:12:55 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:12:55 compute-0 nova_compute[189279]: </domain>
Dec 10 20:12:55 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.554 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Preparing to wait for external event network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.554 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.555 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.555 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.556 189283 DEBUG nova.virt.libvirt.vif [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-250151050',display_name='tempest-ServersTestManualDisk-server-250151050',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-250151050',id=8,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTnhRVtOkaEB43hj3b9xkLF/AS5xBqt91JJz2md3hTIC1ctHaB2qLQgFSk1Zu6ZyPqHY7WWH8JPI6LRwH7YTWJ/DZ4DmtLklE1lfKyxzq1OGuzJ+13jtKao+VNcvaCVzA==',key_name='tempest-keypair-1590776143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='713d58cceef640c38aa99b2cb5aafd50',ramdisk_id='',reservation_id='r-21nhx334',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-301505012',owner_user_name='tempest-ServersTestManualDisk-301505012-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:12:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb4b85bd92294252be8009eb039aa323',uuid=89dd49b4-ab03-4bc5-84ea-a2ae3b040e06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.556 189283 DEBUG nova.network.os_vif_util [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Converting VIF {"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.557 189283 DEBUG nova.network.os_vif_util [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:2f:3e,bridge_name='br-int',has_traffic_filtering=True,id=fd5af3d6-f054-4886-9ca7-2888772def6f,network=Network(da989677-bb1a-43bc-bbae-3ccb2693342f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd5af3d6-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.557 189283 DEBUG os_vif [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:2f:3e,bridge_name='br-int',has_traffic_filtering=True,id=fd5af3d6-f054-4886-9ca7-2888772def6f,network=Network(da989677-bb1a-43bc-bbae-3ccb2693342f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd5af3d6-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.558 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.558 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.559 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.562 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.562 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd5af3d6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.563 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd5af3d6-f0, col_values=(('external_ids', {'iface-id': 'fd5af3d6-f054-4886-9ca7-2888772def6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:2f:3e', 'vm-uuid': '89dd49b4-ab03-4bc5-84ea-a2ae3b040e06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.565 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:55 compute-0 NetworkManager[56238]: <info>  [1765397575.5664] manager: (tapfd5af3d6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.567 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.577 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.578 189283 INFO os_vif [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:2f:3e,bridge_name='br-int',has_traffic_filtering=True,id=fd5af3d6-f054-4886-9ca7-2888772def6f,network=Network(da989677-bb1a-43bc-bbae-3ccb2693342f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd5af3d6-f0')
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.642 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.642 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.643 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] No VIF found with MAC fa:16:3e:ae:2f:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:12:55 compute-0 nova_compute[189279]: 2025-12-10 20:12:55.643 189283 INFO nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Using config drive
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.074 189283 INFO nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Creating config drive at /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk.config
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.087 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpblpp_q8t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.156 189283 DEBUG nova.compute.manager [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-changed-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.157 189283 DEBUG nova.compute.manager [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Refreshing instance network info cache due to event network-changed-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.158 189283 DEBUG oslo_concurrency.lockutils [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.217 189283 DEBUG oslo_concurrency.processutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpblpp_q8t" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:56 compute-0 sshd-session[249018]: Connection closed by 45.148.10.121 port 51596 [preauth]
Dec 10 20:12:56 compute-0 kernel: tapfd5af3d6-f0: entered promiscuous mode
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.299 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 NetworkManager[56238]: <info>  [1765397576.3015] manager: (tapfd5af3d6-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Dec 10 20:12:56 compute-0 ovn_controller[97701]: 2025-12-10T20:12:56Z|00081|binding|INFO|Claiming lport fd5af3d6-f054-4886-9ca7-2888772def6f for this chassis.
Dec 10 20:12:56 compute-0 ovn_controller[97701]: 2025-12-10T20:12:56Z|00082|binding|INFO|fd5af3d6-f054-4886-9ca7-2888772def6f: Claiming fa:16:3e:ae:2f:3e 10.100.0.3
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.311 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:2f:3e 10.100.0.3'], port_security=['fa:16:3e:ae:2f:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '89dd49b4-ab03-4bc5-84ea-a2ae3b040e06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da989677-bb1a-43bc-bbae-3ccb2693342f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '713d58cceef640c38aa99b2cb5aafd50', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9fb4b7bb-1225-4165-96aa-4dc39a1eec29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6784e28d-40b3-49b1-a2c7-0fca40fd4894, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=fd5af3d6-f054-4886-9ca7-2888772def6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.312 106564 INFO neutron.agent.ovn.metadata.agent [-] Port fd5af3d6-f054-4886-9ca7-2888772def6f in datapath da989677-bb1a-43bc-bbae-3ccb2693342f bound to our chassis
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.314 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da989677-bb1a-43bc-bbae-3ccb2693342f
Dec 10 20:12:56 compute-0 ovn_controller[97701]: 2025-12-10T20:12:56Z|00083|binding|INFO|Setting lport fd5af3d6-f054-4886-9ca7-2888772def6f up in Southbound
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.328 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.328 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0574268e-ef2f-4fc5-a4f0-10c254181931]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.329 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda989677-b1 in ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:12:56 compute-0 ovn_controller[97701]: 2025-12-10T20:12:56Z|00084|binding|INFO|Setting lport fd5af3d6-f054-4886-9ca7-2888772def6f ovn-installed in OVS
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.332 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.332 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda989677-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.332 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[add97da2-2b14-4171-bbd7-83e0f4676a6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.333 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[04fbebb6-b8b9-44de-96e4-c1ad6b4cbefa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 systemd-machined[155642]: New machine qemu-7-instance-00000008.
Dec 10 20:12:56 compute-0 systemd-udevd[249039]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.346 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f17b5a-f08b-4241-a964-029b370cecf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000008.
Dec 10 20:12:56 compute-0 NetworkManager[56238]: <info>  [1765397576.3600] device (tapfd5af3d6-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:12:56 compute-0 NetworkManager[56238]: <info>  [1765397576.3638] device (tapfd5af3d6-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.374 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0640dd-2466-4ee4-9ec8-454897cc3e63]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.402 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1a37e4-aafa-444a-b281-1dbe1f844a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 systemd-udevd[249042]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.412 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b942489c-9b70-4a28-b54c-32810526b74e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 NetworkManager[56238]: <info>  [1765397576.4150] manager: (tapda989677-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.455 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac50603-5347-431a-8ecd-1f95a9eff565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.459 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[fb593b27-4cd1-4def-bcf3-151674ad6562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:56 compute-0 NetworkManager[56238]: <info>  [1765397576.4917] device (tapda989677-b0): carrier: link connected
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.499 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[e363e442-a526-4125-91a1-08c9d0578d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.509 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.523 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcbb662-703c-4f84-8ec7-22ad586743e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda989677-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:f7:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490514, 'reachable_time': 25664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249070, 'error': None, 'target': 'ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.530 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.531 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.541 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[eee68661-bff3-4848-ab1e-5b0a5f42b4f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:f76e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490514, 'tstamp': 490514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249071, 'error': None, 'target': 'ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.560 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[453952a3-fbdf-473f-a062-cc3c31116c81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda989677-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:f7:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490514, 'reachable_time': 25664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249072, 'error': None, 'target': 'ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.590 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[4860955f-6250-4684-a60b-96627bf67bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.650 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[de064570-7aa4-450d-8d7b-d90ddf0a8d70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.652 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda989677-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.653 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.653 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda989677-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:56 compute-0 NetworkManager[56238]: <info>  [1765397576.6564] manager: (tapda989677-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.656 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 kernel: tapda989677-b0: entered promiscuous mode
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.658 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.660 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda989677-b0, col_values=(('external_ids', {'iface-id': '1a9a5ff2-c47c-4dcb-ad70-b3bd475c5281'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.662 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 ovn_controller[97701]: 2025-12-10T20:12:56Z|00085|binding|INFO|Releasing lport 1a9a5ff2-c47c-4dcb-ad70-b3bd475c5281 from this chassis (sb_readonly=0)
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.663 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.666 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da989677-bb1a-43bc-bbae-3ccb2693342f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da989677-bb1a-43bc-bbae-3ccb2693342f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.668 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[22acf4c6-85c6-4a15-8a72-c88b70f5994e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.668 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-da989677-bb1a-43bc-bbae-3ccb2693342f
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/da989677-bb1a-43bc-bbae-3ccb2693342f.pid.haproxy
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID da989677-bb1a-43bc-bbae-3ccb2693342f
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:12:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:56.670 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f', 'env', 'PROCESS_TAG=haproxy-da989677-bb1a-43bc-bbae-3ccb2693342f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da989677-bb1a-43bc-bbae-3ccb2693342f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.674 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.694 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397576.6942003, 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.695 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] VM Started (Lifecycle Event)
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.732 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.738 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397576.6950746, 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.738 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] VM Paused (Lifecycle Event)
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.759 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.764 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.782 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:12:56 compute-0 nova_compute[189279]: 2025-12-10 20:12:56.945 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:57 compute-0 podman[249110]: 2025-12-10 20:12:57.054159102 +0000 UTC m=+0.050019763 container create f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 20:12:57 compute-0 systemd[1]: Started libpod-conmon-f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758.scope.
Dec 10 20:12:57 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:12:57 compute-0 podman[249110]: 2025-12-10 20:12:57.025945449 +0000 UTC m=+0.021806130 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265f3160df4d4b73600bee6d2b1af55ee6f15d2ea2888b31c18f395969555510/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:12:57 compute-0 podman[249110]: 2025-12-10 20:12:57.141987286 +0000 UTC m=+0.137847977 container init f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:12:57 compute-0 podman[249110]: 2025-12-10 20:12:57.148861862 +0000 UTC m=+0.144722523 container start f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 10 20:12:57 compute-0 neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f[249124]: [NOTICE]   (249128) : New worker (249130) forked
Dec 10 20:12:57 compute-0 neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f[249124]: [NOTICE]   (249128) : Loading success.
Dec 10 20:12:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:57.200 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.514 189283 DEBUG nova.network.neutron [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Updated VIF entry in instance network info cache for port fd5af3d6-f054-4886-9ca7-2888772def6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.515 189283 DEBUG nova.network.neutron [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Updating instance_info_cache with network_info: [{"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.533 189283 DEBUG oslo_concurrency.lockutils [req-3c2ca505-da01-49a0-bc1d-411997d1f44e req-572f11eb-faa5-4c6a-a912-1f0197556ed1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.643 189283 DEBUG nova.network.neutron [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updating instance_info_cache with network_info: [{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.663 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Releasing lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.664 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance network_info: |[{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.665 189283 DEBUG oslo_concurrency.lockutils [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.666 189283 DEBUG nova.network.neutron [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Refreshing network info cache for port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.669 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Start _get_guest_xml network_info=[{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.677 189283 WARNING nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.688 189283 DEBUG nova.virt.libvirt.host [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.689 189283 DEBUG nova.virt.libvirt.host [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.696 189283 DEBUG nova.virt.libvirt.host [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.697 189283 DEBUG nova.virt.libvirt.host [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.697 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.698 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.698 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.699 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.699 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.700 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.700 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.701 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.701 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.702 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.702 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.703 189283 DEBUG nova.virt.hardware [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.708 189283 DEBUG nova.virt.libvirt.vif [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1460650199',display_name='tempest-ServerActionsTestJSON-server-1460650199',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1460650199',id=7,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMZ4vtRw7tBhuM4o6MjvfbKNBIl4FQd4G6qFZVFfMRp+DuluVXm6EdlnooCaRI1wwhsIBxXE3togl4a//g9wsD+ZeM3HnXvIhtkdJ8sJuoGMY7C3lFqm65C06eytVKJQw==',key_name='tempest-keypair-71097797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e63db29894648c7a06ef3bcb4b98768',ramdisk_id='',reservation_id='r-1tl971la',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-53104742',owner_user_name='tempest-ServerActionsTestJSON-53104742-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:12:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c9cd4059c654dd4947e252e9f3acf85',uuid=63639261-d8d9-46e1-8b3f-55af36a85e58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.708 189283 DEBUG nova.network.os_vif_util [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converting VIF {"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.709 189283 DEBUG nova.network.os_vif_util [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.711 189283 DEBUG nova.objects.instance [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.724 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <uuid>63639261-d8d9-46e1-8b3f-55af36a85e58</uuid>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <name>instance-00000007</name>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <nova:name>tempest-ServerActionsTestJSON-server-1460650199</nova:name>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:12:57</nova:creationTime>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:user uuid="0c9cd4059c654dd4947e252e9f3acf85">tempest-ServerActionsTestJSON-53104742-project-member</nova:user>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:project uuid="2e63db29894648c7a06ef3bcb4b98768">tempest-ServerActionsTestJSON-53104742</nova:project>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         <nova:port uuid="a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1">
Dec 10 20:12:57 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <system>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <entry name="serial">63639261-d8d9-46e1-8b3f-55af36a85e58</entry>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <entry name="uuid">63639261-d8d9-46e1-8b3f-55af36a85e58</entry>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </system>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <os>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   </os>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <features>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   </features>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.config"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:f8:b0:0b"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <target dev="tapa0f4e290-5b"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/console.log" append="off"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <video>
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </video>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:12:57 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:12:57 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:12:57 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:12:57 compute-0 nova_compute[189279]: </domain>
Dec 10 20:12:57 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.725 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Preparing to wait for external event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.726 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.727 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.727 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.728 189283 DEBUG nova.virt.libvirt.vif [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1460650199',display_name='tempest-ServerActionsTestJSON-server-1460650199',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1460650199',id=7,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMZ4vtRw7tBhuM4o6MjvfbKNBIl4FQd4G6qFZVFfMRp+DuluVXm6EdlnooCaRI1wwhsIBxXE3togl4a//g9wsD+ZeM3HnXvIhtkdJ8sJuoGMY7C3lFqm65C06eytVKJQw==',key_name='tempest-keypair-71097797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e63db29894648c7a06ef3bcb4b98768',ramdisk_id='',reservation_id='r-1tl971la',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-53104742',owner_user_name='tempest-ServerActionsTestJSON-53104742-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:12:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c9cd4059c654dd4947e252e9f3acf85',uuid=63639261-d8d9-46e1-8b3f-55af36a85e58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", 
"qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.728 189283 DEBUG nova.network.os_vif_util [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converting VIF {"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.729 189283 DEBUG nova.network.os_vif_util [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.730 189283 DEBUG os_vif [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.730 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.731 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.731 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.736 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.736 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f4e290-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.737 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0f4e290-5b, col_values=(('external_ids', {'iface-id': 'a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:b0:0b', 'vm-uuid': '63639261-d8d9-46e1-8b3f-55af36a85e58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:57 compute-0 NetworkManager[56238]: <info>  [1765397577.7405] manager: (tapa0f4e290-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.742 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.756 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.757 189283 INFO os_vif [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b')
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.842 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.842 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.843 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] No VIF found with MAC fa:16:3e:f8:b0:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.844 189283 INFO nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Using config drive
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.867 189283 DEBUG nova.network.neutron [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Successfully updated port: 42ea5f6d-dd00-4169-8385-3b8709530411 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.881 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.881 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquired lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:12:57 compute-0 nova_compute[189279]: 2025-12-10 20:12:57.882 189283 DEBUG nova.network.neutron [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.112 189283 DEBUG nova.network.neutron [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:12:58 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 10 20:12:58 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 10 20:12:58 compute-0 podman[249142]: 2025-12-10 20:12:58.473570608 +0000 UTC m=+0.077384693 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.569 189283 DEBUG nova.compute.manager [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.570 189283 DEBUG nova.compute.manager [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing instance network info cache due to event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.570 189283 DEBUG oslo_concurrency.lockutils [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.616 189283 INFO nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Creating config drive at /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.config
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.621 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8b4kkhnt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.746 189283 DEBUG oslo_concurrency.processutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8b4kkhnt" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:12:58 compute-0 kernel: tapa0f4e290-5b: entered promiscuous mode
Dec 10 20:12:58 compute-0 NetworkManager[56238]: <info>  [1765397578.8036] manager: (tapa0f4e290-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Dec 10 20:12:58 compute-0 systemd-udevd[249061]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:12:58 compute-0 ovn_controller[97701]: 2025-12-10T20:12:58Z|00086|binding|INFO|Claiming lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for this chassis.
Dec 10 20:12:58 compute-0 ovn_controller[97701]: 2025-12-10T20:12:58Z|00087|binding|INFO|a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1: Claiming fa:16:3e:f8:b0:0b 10.100.0.8
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.807 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.817 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:b0:0b 10.100.0.8'], port_security=['fa:16:3e:f8:b0:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '63639261-d8d9-46e1-8b3f-55af36a85e58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ecefb2-de1d-4471-80a0-8f797ab99021', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e63db29894648c7a06ef3bcb4b98768', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e991cb1-ab23-4fa3-b4b6-83b24087f30e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b611bc6-8b69-4351-a79d-b310ec70a551, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:12:58 compute-0 NetworkManager[56238]: <info>  [1765397578.8199] device (tapa0f4e290-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:12:58 compute-0 NetworkManager[56238]: <info>  [1765397578.8210] device (tapa0f4e290-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.818 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 in datapath 77ecefb2-de1d-4471-80a0-8f797ab99021 bound to our chassis
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.821 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77ecefb2-de1d-4471-80a0-8f797ab99021
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.824 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:58 compute-0 ovn_controller[97701]: 2025-12-10T20:12:58Z|00088|binding|INFO|Setting lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 ovn-installed in OVS
Dec 10 20:12:58 compute-0 ovn_controller[97701]: 2025-12-10T20:12:58Z|00089|binding|INFO|Setting lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 up in Southbound
Dec 10 20:12:58 compute-0 nova_compute[189279]: 2025-12-10 20:12:58.834 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.839 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c94088a9-3687-41ff-b2c9-8b609b05f5f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.840 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77ecefb2-d1 in ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.843 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77ecefb2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.843 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[83b30350-87a0-43ca-95b2-cd35d4f1bc04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.844 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[86833c97-27cb-4b79-b249-f632046ab2f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 systemd-machined[155642]: New machine qemu-8-instance-00000007.
Dec 10 20:12:58 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000007.
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.864 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[e17e3f86-6f5f-4a6d-8dce-1ffa558ba5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.893 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cee1ba-d0d8-43a0-be55-b9e58947ac97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.930 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[fa36f7d4-e815-49ea-b37b-fc9bcdfc31a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.936 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[774b6cac-806e-43b3-972a-4b9b318a9d3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 NetworkManager[56238]: <info>  [1765397578.9373] manager: (tap77ecefb2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.971 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[c71b5f79-2a42-4fc5-9f3a-75c691f8c998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:58.975 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c049a1-2f4f-4b37-abac-e290a067a859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:58 compute-0 NetworkManager[56238]: <info>  [1765397578.9986] device (tap77ecefb2-d0): carrier: link connected
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.004 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[9487c90f-1e3d-46b3-907b-27421bde1209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.028 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b5868f82-4bcc-4df3-bc42-8932658c13e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ecefb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:30:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490765, 'reachable_time': 28676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249213, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.046 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a50a5dbb-ee05-493c-b629-947b9a124718]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:306b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490765, 'tstamp': 490765}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249216, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.062 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce3d76e-a8a8-4a14-9ac8-08dd0e139567]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ecefb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:30:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490765, 'reachable_time': 28676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249220, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.093 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[25ded39a-49eb-46d1-a10f-30a7423b3537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.150 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d47ff3-4ef1-4b8a-b0a4-770787d49d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.151 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ecefb2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.152 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.152 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77ecefb2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:59 compute-0 NetworkManager[56238]: <info>  [1765397579.1552] manager: (tap77ecefb2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.154 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 kernel: tap77ecefb2-d0: entered promiscuous mode
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.158 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.159 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77ecefb2-d0, col_values=(('external_ids', {'iface-id': '2f9d87e3-f102-4fe2-b4d5-b25a5d31091b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.162 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 ovn_controller[97701]: 2025-12-10T20:12:59Z|00090|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.165 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77ecefb2-de1d-4471-80a0-8f797ab99021.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77ecefb2-de1d-4471-80a0-8f797ab99021.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.166 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f65cda-9ef0-4b60-bfef-3f7d4f874584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.167 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-77ecefb2-de1d-4471-80a0-8f797ab99021
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/77ecefb2-de1d-4471-80a0-8f797ab99021.pid.haproxy
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID 77ecefb2-de1d-4471-80a0-8f797ab99021
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:12:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:12:59.168 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'env', 'PROCESS_TAG=haproxy-77ecefb2-de1d-4471-80a0-8f797ab99021', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77ecefb2-de1d-4471-80a0-8f797ab99021.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.174 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.181 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397579.1808589, 63639261-d8d9-46e1-8b3f-55af36a85e58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.182 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] VM Started (Lifecycle Event)
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.205 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.212 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397579.181, 63639261-d8d9-46e1-8b3f-55af36a85e58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.212 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] VM Paused (Lifecycle Event)
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.231 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.235 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.257 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.457 189283 DEBUG nova.network.neutron [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updated VIF entry in instance network info cache for port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.458 189283 DEBUG nova.network.neutron [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updating instance_info_cache with network_info: [{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.493 189283 DEBUG oslo_concurrency.lockutils [req-201398c6-ee3a-4058-a791-197be4d745f1 req-9c902e4a-b3ba-4d66-a1c8-5c13d1807312 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:12:59 compute-0 podman[249253]: 2025-12-10 20:12:59.606231203 +0000 UTC m=+0.071017711 container create bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:12:59 compute-0 systemd[1]: Started libpod-conmon-bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df.scope.
Dec 10 20:12:59 compute-0 podman[249253]: 2025-12-10 20:12:59.569647874 +0000 UTC m=+0.034434412 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:12:59 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66006f795d475f944dc9c57ec5cddbcb0f2be2c355de947796f5c9dda7c08028/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:12:59 compute-0 podman[249253]: 2025-12-10 20:12:59.705535507 +0000 UTC m=+0.170322025 container init bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:12:59 compute-0 podman[249253]: 2025-12-10 20:12:59.713014849 +0000 UTC m=+0.177801347 container start bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.728 189283 DEBUG nova.network.neutron [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:12:59 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[249268]: [NOTICE]   (249272) : New worker (249274) forked
Dec 10 20:12:59 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[249268]: [NOTICE]   (249272) : Loading success.
Dec 10 20:12:59 compute-0 podman[203484]: time="2025-12-10T20:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.746 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Releasing lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.747 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Instance network_info: |[{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.748 189283 DEBUG oslo_concurrency.lockutils [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.748 189283 DEBUG nova.network.neutron [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.751 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Start _get_guest_xml network_info=[{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:12:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.769 189283 WARNING nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:12:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5264 "" "Go-http-client/1.1"
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.775 189283 DEBUG nova.virt.libvirt.host [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.776 189283 DEBUG nova.virt.libvirt.host [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.780 189283 DEBUG nova.virt.libvirt.host [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.780 189283 DEBUG nova.virt.libvirt.host [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.781 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.781 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.782 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.782 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.783 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.783 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.784 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.784 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.784 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.785 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.785 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.786 189283 DEBUG nova.virt.hardware [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.790 189283 DEBUG nova.virt.libvirt.vif [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-626488523',display_name='tempest-AttachInterfacesUnderV243Test-server-626488523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-626488523',id=9,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBP3hnXJJItrqC2tE+2StxWPo5v8r+cO2041o4z57viHydodhBc3A1F11lyuNnqZZJ0DkYUm7DSnNyDti0OpCRBDZ4I0oFVP9621ZbNz9EpBGBi3KR2K8iEQ9nH1cIH7JA==',key_name='tempest-keypair-945515570',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2505343710a74a61bea5fcb849a4b61b',ramdisk_id='',reservation_id='r-w316cjwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-663599908',owner_user_name='tempest-AttachInterfacesUnderV243Test-663599908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:12:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9901235a2b1b4cf4b7a0d6fd53dd0396',uuid=81f60881-4334-4ede-a10d-454a7e8a4154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.790 189283 DEBUG nova.network.os_vif_util [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Converting VIF {"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.791 189283 DEBUG nova.network.os_vif_util [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:c2:44,bridge_name='br-int',has_traffic_filtering=True,id=42ea5f6d-dd00-4169-8385-3b8709530411,network=Network(92918959-6e40-4a1a-9c11-463c49c96b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ea5f6d-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.792 189283 DEBUG nova.objects.instance [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lazy-loading 'pci_devices' on Instance uuid 81f60881-4334-4ede-a10d-454a7e8a4154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.808 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <uuid>81f60881-4334-4ede-a10d-454a7e8a4154</uuid>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <name>instance-00000009</name>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-626488523</nova:name>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:12:59</nova:creationTime>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:user uuid="9901235a2b1b4cf4b7a0d6fd53dd0396">tempest-AttachInterfacesUnderV243Test-663599908-project-member</nova:user>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:project uuid="2505343710a74a61bea5fcb849a4b61b">tempest-AttachInterfacesUnderV243Test-663599908</nova:project>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         <nova:port uuid="42ea5f6d-dd00-4169-8385-3b8709530411">
Dec 10 20:12:59 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <system>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <entry name="serial">81f60881-4334-4ede-a10d-454a7e8a4154</entry>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <entry name="uuid">81f60881-4334-4ede-a10d-454a7e8a4154</entry>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </system>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <os>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   </os>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <features>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   </features>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk.config"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:cb:c2:44"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <target dev="tap42ea5f6d-dd"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/console.log" append="off"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <video>
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </video>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:12:59 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:12:59 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:12:59 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:12:59 compute-0 nova_compute[189279]: </domain>
Dec 10 20:12:59 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.809 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Preparing to wait for external event network-vif-plugged-42ea5f6d-dd00-4169-8385-3b8709530411 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.810 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.810 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.810 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.811 189283 DEBUG nova.virt.libvirt.vif [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-626488523',display_name='tempest-AttachInterfacesUnderV243Test-server-626488523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-626488523',id=9,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBP3hnXJJItrqC2tE+2StxWPo5v8r+cO2041o4z57viHydodhBc3A1F11lyuNnqZZJ0DkYUm7DSnNyDti0OpCRBDZ4I0oFVP9621ZbNz9EpBGBi3KR2K8iEQ9nH1cIH7JA==',key_name='tempest-keypair-945515570',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2505343710a74a61bea5fcb849a4b61b',ramdisk_id='',reservation_id='r-w316cjwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-663599908',owner_user_name='tempest-AttachInterfacesUnderV243Test-663599908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:12:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9901235a2b1b4cf4b7a0d6fd53dd0396',uuid=81f60881-4334-4ede-a10d-454a7e8a4154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.811 189283 DEBUG nova.network.os_vif_util [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Converting VIF {"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.812 189283 DEBUG nova.network.os_vif_util [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:c2:44,bridge_name='br-int',has_traffic_filtering=True,id=42ea5f6d-dd00-4169-8385-3b8709530411,network=Network(92918959-6e40-4a1a-9c11-463c49c96b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ea5f6d-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.812 189283 DEBUG os_vif [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:c2:44,bridge_name='br-int',has_traffic_filtering=True,id=42ea5f6d-dd00-4169-8385-3b8709530411,network=Network(92918959-6e40-4a1a-9c11-463c49c96b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ea5f6d-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.813 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.813 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.813 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.816 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.816 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42ea5f6d-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.817 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42ea5f6d-dd, col_values=(('external_ids', {'iface-id': '42ea5f6d-dd00-4169-8385-3b8709530411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:c2:44', 'vm-uuid': '81f60881-4334-4ede-a10d-454a7e8a4154'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.819 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 NetworkManager[56238]: <info>  [1765397579.8206] manager: (tap42ea5f6d-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.821 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.826 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.827 189283 INFO os_vif [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:c2:44,bridge_name='br-int',has_traffic_filtering=True,id=42ea5f6d-dd00-4169-8385-3b8709530411,network=Network(92918959-6e40-4a1a-9c11-463c49c96b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ea5f6d-dd')
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.873 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.874 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.874 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] No VIF found with MAC fa:16:3e:cb:c2:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:12:59 compute-0 nova_compute[189279]: 2025-12-10 20:12:59.875 189283 INFO nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Using config drive
Dec 10 20:13:00 compute-0 nova_compute[189279]: 2025-12-10 20:13:00.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:00 compute-0 nova_compute[189279]: 2025-12-10 20:13:00.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:13:00 compute-0 nova_compute[189279]: 2025-12-10 20:13:00.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:13:00 compute-0 nova_compute[189279]: 2025-12-10 20:13:00.509 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 10 20:13:00 compute-0 nova_compute[189279]: 2025-12-10 20:13:00.509 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 10 20:13:00 compute-0 nova_compute[189279]: 2025-12-10 20:13:00.509 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 10 20:13:00 compute-0 nova_compute[189279]: 2025-12-10 20:13:00.510 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
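The _heal_instance_info_cache and _reclaim_queued_deletes entries above and below are oslo.service periodic tasks that the compute manager runs on a timer. A minimal sketch of how such a task is declared, using a hypothetical manager class rather than Nova's actual ComputeManager:

# Minimal sketch of an oslo.service periodic task; DemoManager and the
# 60-second spacing are illustrative, not Nova's real configuration.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF


class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _heal_info_cache(self, context):
        # Invoked by run_periodic_tasks(), which is what produces the
        # "Running periodic task ..." lines seen in this log.
        pass


DemoManager().run_periodic_tasks(context=None)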
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.381 189283 INFO nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Creating config drive at /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk.config
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.389 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4fbpkk_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:01 compute-0 openstack_network_exporter[205632]: ERROR   20:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:13:01 compute-0 openstack_network_exporter[205632]: ERROR   20:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:13:01 compute-0 openstack_network_exporter[205632]: ERROR   20:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:13:01 compute-0 openstack_network_exporter[205632]: ERROR   20:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:13:01 compute-0 openstack_network_exporter[205632]: ERROR   20:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.521 189283 DEBUG oslo_concurrency.processutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4fbpkk_" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
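The config drive above is built by shelling out through oslo.concurrency's processutils, which raises on a non-zero exit code; that is why the log only records the successful "returned: 0". A minimal sketch of the same call pattern, with placeholder paths instead of the instance directory and the -publisher flag omitted:

# Sketch of running mkisofs through oslo.concurrency; the output path and
# source directory are placeholders, not the paths from the log.
from oslo_concurrency import processutils

cmd = ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
       '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
       '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/config-drive-src']
stdout, stderr = processutils.execute(*cmd)
# processutils.ProcessExecutionError is raised if the exit code is non-zero.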
Dec 10 20:13:01 compute-0 kernel: tap42ea5f6d-dd: entered promiscuous mode
Dec 10 20:13:01 compute-0 NetworkManager[56238]: <info>  [1765397581.5751] manager: (tap42ea5f6d-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Dec 10 20:13:01 compute-0 ovn_controller[97701]: 2025-12-10T20:13:01Z|00091|binding|INFO|Claiming lport 42ea5f6d-dd00-4169-8385-3b8709530411 for this chassis.
Dec 10 20:13:01 compute-0 ovn_controller[97701]: 2025-12-10T20:13:01Z|00092|binding|INFO|42ea5f6d-dd00-4169-8385-3b8709530411: Claiming fa:16:3e:cb:c2:44 10.100.0.11
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.574 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.580 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.587 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:c2:44 10.100.0.11'], port_security=['fa:16:3e:cb:c2:44 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '81f60881-4334-4ede-a10d-454a7e8a4154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92918959-6e40-4a1a-9c11-463c49c96b2f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2505343710a74a61bea5fcb849a4b61b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1fbcc347-f372-4bb1-a6b2-48981642c44d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2a3fe1a-c75e-4977-a15b-b5bec4793c6b, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=42ea5f6d-dd00-4169-8385-3b8709530411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.588 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 42ea5f6d-dd00-4169-8385-3b8709530411 in datapath 92918959-6e40-4a1a-9c11-463c49c96b2f bound to our chassis
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.591 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92918959-6e40-4a1a-9c11-463c49c96b2f
Dec 10 20:13:01 compute-0 ovn_controller[97701]: 2025-12-10T20:13:01Z|00093|binding|INFO|Setting lport 42ea5f6d-dd00-4169-8385-3b8709530411 ovn-installed in OVS
Dec 10 20:13:01 compute-0 ovn_controller[97701]: 2025-12-10T20:13:01Z|00094|binding|INFO|Setting lport 42ea5f6d-dd00-4169-8385-3b8709530411 up in Southbound
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.599 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.600 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.603 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb6b92e-e6a6-43ee-b4c4-e528d38ab6fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.604 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92918959-61 in ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
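The privileged ip_lib calls behind this step create a veth pair and move one end into the ovnmeta-<network-id> namespace. A rough equivalent with pyroute2, the library neutron's privsep helpers build on, using made-up interface and namespace names:

# Illustrative veth-into-namespace sketch with pyroute2; names are
# invented and error handling is omitted.
from pyroute2 import IPRoute, netns

NS = 'ovnmeta-example'
netns.create(NS)

with IPRoute() as ipr:
    ipr.link('add', ifname='tap-ex0', kind='veth', peer='tap-ex1')
    peer = ipr.link_lookup(ifname='tap-ex1')[0]
    ipr.link('set', index=peer, net_ns_fd=NS)   # move peer end into the namespace
    host = ipr.link_lookup(ifname='tap-ex0')[0]
    ipr.link('set', index=host, state='up')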
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.610 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92918959-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.610 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e673b767-eeef-419e-8dc6-1a01109ead99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.611 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe0adb1-6fb3-4265-a87c-3e2bfd45aacd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 systemd-machined[155642]: New machine qemu-9-instance-00000009.
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.623 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[167e4c10-c87b-4971-814d-038b62ef3394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.648 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e1979d05-06ed-453e-b09f-2d5e51ac0a61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 systemd-udevd[249308]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.675 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[29ebc775-d3bd-460e-a290-b04716362482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 NetworkManager[56238]: <info>  [1765397581.6815] device (tap42ea5f6d-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:13:01 compute-0 NetworkManager[56238]: <info>  [1765397581.6822] device (tap42ea5f6d-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.684 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f8d60a-8cba-489a-9840-521743a09ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 NetworkManager[56238]: <info>  [1765397581.6851] manager: (tap92918959-60): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.714 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[b722549b-97a0-4bd3-81c3-a0a01ef43581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.717 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[0c49b83f-a031-4030-92ad-f836ea9a1a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 NetworkManager[56238]: <info>  [1765397581.7438] device (tap92918959-60): carrier: link connected
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.749 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[e93c5f37-6a92-472c-b65d-35680301eda4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.771 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[7323a15d-0d54-43c6-8bad-f5dbe68cfac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92918959-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:df:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491040, 'reachable_time': 33922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249336, 'error': None, 'target': 'ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.796 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[88f5190f-2665-40a5-a642-f3b57cc0f0b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:dfa7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491040, 'tstamp': 491040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249337, 'error': None, 'target': 'ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.817 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[42bf8738-7793-48c3-b219-a49085e75769]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92918959-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:df:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491040, 'reachable_time': 33922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249338, 'error': None, 'target': 'ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.863 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[767ea9e0-836f-4381-8ef3-13347fdd5ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.870 189283 DEBUG nova.compute.manager [req-c1295493-aab3-4c64-b730-0dc47732d6d9 req-f7fe414e-cad9-45a8-9361-5aa87a53530e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.871 189283 DEBUG oslo_concurrency.lockutils [req-c1295493-aab3-4c64-b730-0dc47732d6d9 req-f7fe414e-cad9-45a8-9361-5aa87a53530e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.872 189283 DEBUG oslo_concurrency.lockutils [req-c1295493-aab3-4c64-b730-0dc47732d6d9 req-f7fe414e-cad9-45a8-9361-5aa87a53530e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.873 189283 DEBUG oslo_concurrency.lockutils [req-c1295493-aab3-4c64-b730-0dc47732d6d9 req-f7fe414e-cad9-45a8-9361-5aa87a53530e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.873 189283 DEBUG nova.compute.manager [req-c1295493-aab3-4c64-b730-0dc47732d6d9 req-f7fe414e-cad9-45a8-9361-5aa87a53530e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Processing event network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.875 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.888 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.889 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397581.8875508, 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.890 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] VM Resumed (Lifecycle Event)
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.900 189283 INFO nova.virt.libvirt.driver [-] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Instance spawned successfully.
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.901 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.938 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.948 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.949 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.950 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.951 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.952 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.953 189283 DEBUG nova.virt.libvirt.driver [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.955 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[44db0c24-d153-4d36-958a-cfdea123659c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.956 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92918959-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.957 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.957 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92918959-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:01 compute-0 NetworkManager[56238]: <info>  [1765397581.9612] manager: (tap92918959-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec 10 20:13:01 compute-0 kernel: tap92918959-60: entered promiscuous mode
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.962 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.970 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92918959-60, col_values=(('external_ids', {'iface-id': '86b58a68-ab3c-4f05-ad6f-70a78da6a224'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
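The two ovsdbapp transactions above plug the host end of the veth into br-int and tag it with the iface-id that ovn-controller binds against. The same change done by hand with ovs-vsctl, using the values from these log lines, would look roughly like:

# Hand-run equivalent of the AddPortCommand/DbSetCommand transactions;
# the port name and iface-id are copied from the log and differ per host.
import subprocess

subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int',
                'tap92918959-60'], check=True)
subprocess.run(['ovs-vsctl', 'set', 'Interface', 'tap92918959-60',
                'external_ids:iface-id=86b58a68-ab3c-4f05-ad6f-70a78da6a224'],
               check=True)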
Dec 10 20:13:01 compute-0 ovn_controller[97701]: 2025-12-10T20:13:01Z|00095|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.974 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92918959-6e40-4a1a-9c11-463c49c96b2f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92918959-6e40-4a1a-9c11-463c49c96b2f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.973 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.975 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[8a371e9f-f6bb-4b06-a0b5-2fbfd17ee5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.976 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-92918959-6e40-4a1a-9c11-463c49c96b2f
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/92918959-6e40-4a1a-9c11-463c49c96b2f.pid.haproxy
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID 92918959-6e40-4a1a-9c11-463c49c96b2f
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.976 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:01 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:01.977 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f', 'env', 'PROCESS_TAG=haproxy-92918959-6e40-4a1a-9c11-463c49c96b2f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92918959-6e40-4a1a-9c11-463c49c96b2f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
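The haproxy_cfg dump above is rendered per network before the agent launches haproxy inside the ovnmeta namespace via rootwrap. A stripped-down sketch of that rendering with string.Template; the real template lives in neutron.agent.ovn.metadata.driver and carries more options than are reproduced here:

# Hypothetical per-network metadata-proxy config rendering; only a subset
# of the options from the logged haproxy_cfg is included.
from string import Template

TEMPLATE = Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    user        root
    group       root
    maxconn     1024
    pidfile     $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket
    http-request add-header X-OVN-Network-ID $network_id
""")

cfg_text = TEMPLATE.substitute(
    network_id='92918959-6e40-4a1a-9c11-463c49c96b2f',
    pidfile='/var/lib/neutron/external/pids/'
            '92918959-6e40-4a1a-9c11-463c49c96b2f.pid.haproxy',
    socket='/var/lib/neutron/metadata_proxy',
)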
Dec 10 20:13:01 compute-0 nova_compute[189279]: 2025-12-10 20:13:01.998 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.001 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.042 189283 INFO nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Took 11.20 seconds to spawn the instance on the hypervisor.
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.043 189283 DEBUG nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.070 189283 DEBUG nova.compute.manager [req-2bcb731d-e20e-461f-9c85-a6cb8b304723 req-a549e205-8150-428a-9e09-a26725641225 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received event network-vif-plugged-42ea5f6d-dd00-4169-8385-3b8709530411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.071 189283 DEBUG oslo_concurrency.lockutils [req-2bcb731d-e20e-461f-9c85-a6cb8b304723 req-a549e205-8150-428a-9e09-a26725641225 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.071 189283 DEBUG oslo_concurrency.lockutils [req-2bcb731d-e20e-461f-9c85-a6cb8b304723 req-a549e205-8150-428a-9e09-a26725641225 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.072 189283 DEBUG oslo_concurrency.lockutils [req-2bcb731d-e20e-461f-9c85-a6cb8b304723 req-a549e205-8150-428a-9e09-a26725641225 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.072 189283 DEBUG nova.compute.manager [req-2bcb731d-e20e-461f-9c85-a6cb8b304723 req-a549e205-8150-428a-9e09-a26725641225 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Processing event network-vif-plugged-42ea5f6d-dd00-4169-8385-3b8709530411 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.113 189283 INFO nova.compute.manager [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Took 11.95 seconds to build instance.
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.139 189283 DEBUG oslo_concurrency.lockutils [None req-065ff2da-e8ff-46b5-bf81-d4bc4e90323f eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.215 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397582.2144136, 81f60881-4334-4ede-a10d-454a7e8a4154 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.216 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] VM Started (Lifecycle Event)
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.218 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.236 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.245 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.259 189283 INFO nova.virt.libvirt.driver [-] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Instance spawned successfully.
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.260 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.263 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.283 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.284 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397582.2157748, 81f60881-4334-4ede-a10d-454a7e8a4154 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.285 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] VM Paused (Lifecycle Event)
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.292 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.292 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.293 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.294 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.294 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.295 189283 DEBUG nova.virt.libvirt.driver [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.323 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.328 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397582.2303896, 81f60881-4334-4ede-a10d-454a7e8a4154 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.329 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] VM Resumed (Lifecycle Event)
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.357 189283 INFO nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Took 9.26 seconds to spawn the instance on the hypervisor.
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.358 189283 DEBUG nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.361 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.371 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.397 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:02 compute-0 podman[249374]: 2025-12-10 20:13:02.404697074 +0000 UTC m=+0.075697118 container create 6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.434 189283 INFO nova.compute.manager [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Took 9.95 seconds to build instance.
Dec 10 20:13:02 compute-0 systemd[1]: Started libpod-conmon-6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119.scope.
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.461 189283 DEBUG oslo_concurrency.lockutils [None req-9d654e6d-54f5-4922-b6e7-9795c3b2252c 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:02 compute-0 podman[249374]: 2025-12-10 20:13:02.371128366 +0000 UTC m=+0.042128430 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:13:02 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e5a88394f091679c82615238771c3ec34710baf052ad094bf630954ab9e627/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.503 189283 DEBUG nova.network.neutron [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updated VIF entry in instance network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.504 189283 DEBUG nova.network.neutron [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
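The network_info blob logged above is plain JSON once extracted from the cache-update line, so pulling the fixed addresses out of it takes only a few lines. The literal below is a trimmed copy of that entry, not the full structure:

# Extract fixed IPs from a (trimmed) network_info entry like the one logged
# in the instance_info_cache update above.
import json

network_info = json.loads("""
[
  {"id": "42ea5f6d-dd00-4169-8385-3b8709530411",
   "address": "fa:16:3e:cb:c2:44",
   "network": {"subnets": [
     {"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]}}
]
""")

for vif in network_info:
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            if ip['type'] == 'fixed':
                print(vif['id'], ip['address'])   # -> 42ea5f6d-... 10.100.0.11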
Dec 10 20:13:02 compute-0 podman[249374]: 2025-12-10 20:13:02.527320827 +0000 UTC m=+0.198320891 container init 6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:13:02 compute-0 podman[249374]: 2025-12-10 20:13:02.535641932 +0000 UTC m=+0.206641986 container start 6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:13:02 compute-0 nova_compute[189279]: 2025-12-10 20:13:02.545 189283 DEBUG oslo_concurrency.lockutils [req-321b340b-2ece-44f8-92fc-065f461186ac req-e395bb4a-eb1d-40cc-87b4-b8112b9232cc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:02 compute-0 neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f[249388]: [NOTICE]   (249392) : New worker (249394) forked
Dec 10 20:13:02 compute-0 neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f[249388]: [NOTICE]   (249392) : Loading success.
Dec 10 20:13:03 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:03.203 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:03 compute-0 nova_compute[189279]: 2025-12-10 20:13:03.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:03 compute-0 nova_compute[189279]: 2025-12-10 20:13:03.493 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:03 compute-0 ovn_controller[97701]: 2025-12-10T20:13:03Z|00096|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:03 compute-0 ovn_controller[97701]: 2025-12-10T20:13:03Z|00097|binding|INFO|Releasing lport 1a9a5ff2-c47c-4dcb-ad70-b3bd475c5281 from this chassis (sb_readonly=0)
Dec 10 20:13:03 compute-0 ovn_controller[97701]: 2025-12-10T20:13:03Z|00098|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:13:03 compute-0 nova_compute[189279]: 2025-12-10 20:13:03.756 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:03 compute-0 ovn_controller[97701]: 2025-12-10T20:13:03Z|00099|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:03 compute-0 ovn_controller[97701]: 2025-12-10T20:13:03Z|00100|binding|INFO|Releasing lport 1a9a5ff2-c47c-4dcb-ad70-b3bd475c5281 from this chassis (sb_readonly=0)
Dec 10 20:13:03 compute-0 ovn_controller[97701]: 2025-12-10T20:13:03Z|00101|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:13:03 compute-0 nova_compute[189279]: 2025-12-10 20:13:03.977 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.002 189283 DEBUG nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.003 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.004 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.005 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.005 189283 DEBUG nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] No waiting events found dispatching network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.006 189283 WARNING nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received unexpected event network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f for instance with vm_state active and task_state None.
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.007 189283 DEBUG nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.008 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.009 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.009 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.010 189283 DEBUG nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Processing event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.011 189283 DEBUG nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.011 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.012 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.013 189283 DEBUG oslo_concurrency.lockutils [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.013 189283 DEBUG nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] No waiting events found dispatching network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.014 189283 WARNING nova.compute.manager [req-7d30ec6b-90b3-4559-ae7c-bab915f4b2e9 req-ad7a763e-9c77-43c9-8e31-1bfa938ecafc 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received unexpected event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for instance with vm_state building and task_state spawning.
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.016 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.039 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397584.036048, 63639261-d8d9-46e1-8b3f-55af36a85e58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.040 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] VM Resumed (Lifecycle Event)
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.043 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.051 189283 INFO nova.virt.libvirt.driver [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance spawned successfully.
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.051 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.073 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.082 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.088 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.089 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.090 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.090 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.091 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.092 189283 DEBUG nova.virt.libvirt.driver [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.136 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.527 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.528 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.528 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.529 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.539 189283 INFO nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Took 13.91 seconds to spawn the instance on the hypervisor.
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.541 189283 DEBUG nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.561 189283 DEBUG nova.compute.manager [req-37cd2d3f-e01b-45c9-abaa-262a8136681c req-88ca48f4-00f4-402b-adc1-776f8e1da023 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received event network-vif-plugged-42ea5f6d-dd00-4169-8385-3b8709530411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.563 189283 DEBUG oslo_concurrency.lockutils [req-37cd2d3f-e01b-45c9-abaa-262a8136681c req-88ca48f4-00f4-402b-adc1-776f8e1da023 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.564 189283 DEBUG oslo_concurrency.lockutils [req-37cd2d3f-e01b-45c9-abaa-262a8136681c req-88ca48f4-00f4-402b-adc1-776f8e1da023 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.565 189283 DEBUG oslo_concurrency.lockutils [req-37cd2d3f-e01b-45c9-abaa-262a8136681c req-88ca48f4-00f4-402b-adc1-776f8e1da023 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.565 189283 DEBUG nova.compute.manager [req-37cd2d3f-e01b-45c9-abaa-262a8136681c req-88ca48f4-00f4-402b-adc1-776f8e1da023 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] No waiting events found dispatching network-vif-plugged-42ea5f6d-dd00-4169-8385-3b8709530411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.566 189283 WARNING nova.compute.manager [req-37cd2d3f-e01b-45c9-abaa-262a8136681c req-88ca48f4-00f4-402b-adc1-776f8e1da023 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received unexpected event network-vif-plugged-42ea5f6d-dd00-4169-8385-3b8709530411 for instance with vm_state active and task_state None.
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.601 189283 INFO nova.compute.manager [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Took 14.46 seconds to build instance.
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.618 189283 DEBUG oslo_concurrency.lockutils [None req-625262aa-b246-406a-a7a1-9594105489dd 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.662 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.725 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.726 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.786 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.793 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.820 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.851 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.852 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.909 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.917 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.977 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:04 compute-0 nova_compute[189279]: 2025-12-10 20:13:04.978 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.038 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:05 compute-0 NetworkManager[56238]: <info>  [1765397585.3479] manager: (patch-br-int-to-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.348 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:05 compute-0 NetworkManager[56238]: <info>  [1765397585.3506] manager: (patch-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.464 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.466 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5130MB free_disk=72.33207702636719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.467 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.467 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.484 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:05 compute-0 ovn_controller[97701]: 2025-12-10T20:13:05Z|00102|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:05 compute-0 ovn_controller[97701]: 2025-12-10T20:13:05Z|00103|binding|INFO|Releasing lport 1a9a5ff2-c47c-4dcb-ad70-b3bd475c5281 from this chassis (sb_readonly=0)
Dec 10 20:13:05 compute-0 ovn_controller[97701]: 2025-12-10T20:13:05Z|00104|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.513 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.607 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 63639261-d8d9-46e1-8b3f-55af36a85e58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.609 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.610 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 81f60881-4334-4ede-a10d-454a7e8a4154 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.611 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.612 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.726 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.748 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.775 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:13:05 compute-0 nova_compute[189279]: 2025-12-10 20:13:05.776 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:06 compute-0 nova_compute[189279]: 2025-12-10 20:13:06.951 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:07 compute-0 podman[249424]: 2025-12-10 20:13:07.133006386 +0000 UTC m=+0.107317792 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:13:07 compute-0 podman[249425]: 2025-12-10 20:13:07.202638307 +0000 UTC m=+0.162877393 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64)
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.373 189283 DEBUG nova.compute.manager [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.375 189283 DEBUG nova.compute.manager [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing instance network info cache due to event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.377 189283 DEBUG oslo_concurrency.lockutils [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.378 189283 DEBUG oslo_concurrency.lockutils [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.379 189283 DEBUG nova.network.neutron [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:07 compute-0 ovn_controller[97701]: 2025-12-10T20:13:07Z|00105|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:07 compute-0 ovn_controller[97701]: 2025-12-10T20:13:07Z|00106|binding|INFO|Releasing lport 1a9a5ff2-c47c-4dcb-ad70-b3bd475c5281 from this chassis (sb_readonly=0)
Dec 10 20:13:07 compute-0 ovn_controller[97701]: 2025-12-10T20:13:07Z|00107|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.443 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.497 189283 DEBUG nova.compute.manager [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-changed-fd5af3d6-f054-4886-9ca7-2888772def6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.498 189283 DEBUG nova.compute.manager [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Refreshing instance network info cache due to event network-changed-fd5af3d6-f054-4886-9ca7-2888772def6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.499 189283 DEBUG oslo_concurrency.lockutils [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.499 189283 DEBUG oslo_concurrency.lockutils [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:07 compute-0 nova_compute[189279]: 2025-12-10 20:13:07.500 189283 DEBUG nova.network.neutron [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Refreshing network info cache for port fd5af3d6-f054-4886-9ca7-2888772def6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.163 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.165 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.165 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.166 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.166 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.168 189283 INFO nova.compute.manager [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Terminating instance
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.169 189283 DEBUG nova.compute.manager [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:13:08 compute-0 kernel: tapfd5af3d6-f0 (unregistering): left promiscuous mode
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.209 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a4a66175-57ff-48da-8473-e93f72da4499" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.210 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:08 compute-0 NetworkManager[56238]: <info>  [1765397588.2188] device (tapfd5af3d6-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.226 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.253 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:08 compute-0 ovn_controller[97701]: 2025-12-10T20:13:08Z|00108|binding|INFO|Releasing lport fd5af3d6-f054-4886-9ca7-2888772def6f from this chassis (sb_readonly=0)
Dec 10 20:13:08 compute-0 ovn_controller[97701]: 2025-12-10T20:13:08Z|00109|binding|INFO|Setting lport fd5af3d6-f054-4886-9ca7-2888772def6f down in Southbound
Dec 10 20:13:08 compute-0 ovn_controller[97701]: 2025-12-10T20:13:08Z|00110|binding|INFO|Removing iface tapfd5af3d6-f0 ovn-installed in OVS
Dec 10 20:13:08 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 10 20:13:08 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000008.scope: Consumed 6.540s CPU time.
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.265 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:2f:3e 10.100.0.3'], port_security=['fa:16:3e:ae:2f:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '89dd49b4-ab03-4bc5-84ea-a2ae3b040e06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da989677-bb1a-43bc-bbae-3ccb2693342f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '713d58cceef640c38aa99b2cb5aafd50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9fb4b7bb-1225-4165-96aa-4dc39a1eec29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6784e28d-40b3-49b1-a2c7-0fca40fd4894, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=fd5af3d6-f054-4886-9ca7-2888772def6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.267 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.266 106564 INFO neutron.agent.ovn.metadata.agent [-] Port fd5af3d6-f054-4886-9ca7-2888772def6f in datapath da989677-bb1a-43bc-bbae-3ccb2693342f unbound from our chassis
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.268 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da989677-bb1a-43bc-bbae-3ccb2693342f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:13:08 compute-0 systemd-machined[155642]: Machine qemu-7-instance-00000008 terminated.
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.272 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b34173b5-4684-40f1-bbc4-f161cdb4a1e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.272 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f namespace which is not needed anymore
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.278 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.301 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.301 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.314 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.315 189283 INFO nova.compute.claims [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:13:08 compute-0 neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f[249124]: [NOTICE]   (249128) : haproxy version is 2.8.14-c23fe91
Dec 10 20:13:08 compute-0 neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f[249124]: [NOTICE]   (249128) : path to executable is /usr/sbin/haproxy
Dec 10 20:13:08 compute-0 neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f[249124]: [WARNING]  (249128) : Exiting Master process...
Dec 10 20:13:08 compute-0 neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f[249124]: [ALERT]    (249128) : Current worker (249130) exited with code 143 (Terminated)
Dec 10 20:13:08 compute-0 neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f[249124]: [WARNING]  (249128) : All workers exited. Exiting... (0)
Dec 10 20:13:08 compute-0 systemd[1]: libpod-f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758.scope: Deactivated successfully.
Dec 10 20:13:08 compute-0 podman[249487]: 2025-12-10 20:13:08.48096932 +0000 UTC m=+0.082343226 container died f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.491 189283 INFO nova.virt.libvirt.driver [-] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Instance destroyed successfully.
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.491 189283 DEBUG nova.objects.instance [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lazy-loading 'resources' on Instance uuid 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.508 189283 DEBUG nova.virt.libvirt.vif [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-250151050',display_name='tempest-ServersTestManualDisk-server-250151050',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-250151050',id=8,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTnhRVtOkaEB43hj3b9xkLF/AS5xBqt91JJz2md3hTIC1ctHaB2qLQgFSk1Zu6ZyPqHY7WWH8JPI6LRwH7YTWJ/DZ4DmtLklE1lfKyxzq1OGuzJ+13jtKao+VNcvaCVzA==',key_name='tempest-keypair-1590776143',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='713d58cceef640c38aa99b2cb5aafd50',ramdisk_id='',reservation_id='r-21nhx334',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-301505012',owner_user_name='tempest-ServersTestManualDisk-301505012-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:13:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb4b85bd92294252be8009eb039aa323',uuid=89dd49b4-ab03-4bc5-84ea-a2ae3b040e06,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.509 189283 DEBUG nova.network.os_vif_util [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Converting VIF {"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.509 189283 DEBUG nova.network.os_vif_util [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:2f:3e,bridge_name='br-int',has_traffic_filtering=True,id=fd5af3d6-f054-4886-9ca7-2888772def6f,network=Network(da989677-bb1a-43bc-bbae-3ccb2693342f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd5af3d6-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.510 189283 DEBUG os_vif [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:2f:3e,bridge_name='br-int',has_traffic_filtering=True,id=fd5af3d6-f054-4886-9ca7-2888772def6f,network=Network(da989677-bb1a-43bc-bbae-3ccb2693342f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd5af3d6-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.512 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.513 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd5af3d6-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.514 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.516 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.520 189283 INFO os_vif [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:2f:3e,bridge_name='br-int',has_traffic_filtering=True,id=fd5af3d6-f054-4886-9ca7-2888772def6f,network=Network(da989677-bb1a-43bc-bbae-3ccb2693342f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd5af3d6-f0')
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.521 189283 INFO nova.virt.libvirt.driver [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Deleting instance files /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06_del
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.522 189283 INFO nova.virt.libvirt.driver [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Deletion of /var/lib/nova/instances/89dd49b4-ab03-4bc5-84ea-a2ae3b040e06_del complete
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.527 189283 DEBUG nova.compute.provider_tree [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758-userdata-shm.mount: Deactivated successfully.
Dec 10 20:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-265f3160df4d4b73600bee6d2b1af55ee6f15d2ea2888b31c18f395969555510-merged.mount: Deactivated successfully.
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.543 189283 DEBUG nova.scheduler.client.report [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:13:08 compute-0 podman[249487]: 2025-12-10 20:13:08.551527577 +0000 UTC m=+0.152901493 container cleanup f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 10 20:13:08 compute-0 systemd[1]: libpod-conmon-f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758.scope: Deactivated successfully.
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.587 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.588 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.598 189283 INFO nova.compute.manager [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Took 0.43 seconds to destroy the instance on the hypervisor.
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.599 189283 DEBUG oslo.service.loopingcall [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.600 189283 DEBUG nova.compute.manager [-] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.600 189283 DEBUG nova.network.neutron [-] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.632 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.633 189283 DEBUG nova.network.neutron [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:13:08 compute-0 podman[249534]: 2025-12-10 20:13:08.644916051 +0000 UTC m=+0.055078149 container remove f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.654 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[fa94a25f-5777-4a26-b527-6c5d3f9b7a36]: (4, ('Wed Dec 10 08:13:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f (f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758)\nf3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758\nWed Dec 10 08:13:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f (f3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758)\nf3f16e858be062a1b2daef2d20e6b59aaaece526001dd9bd47dd26b9dea47758\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.656 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[70f2c824-9f7f-4a5b-aabf-e09478d4941f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.658 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda989677-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:08 compute-0 kernel: tapda989677-b0: left promiscuous mode
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.660 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.668 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[60afff62-4be3-48ca-8e55-fea0d50195c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.680 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.691 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[421dfcc6-a040-4309-ab73-3341e04fb52d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.694 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[954a8761-c150-4361-a408-e67f48c73727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.717 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e386b375-a633-47df-b936-936cba90fad1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490505, 'reachable_time': 15560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249549, 'error': None, 'target': 'ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 systemd[1]: run-netns-ovnmeta\x2dda989677\x2dbb1a\x2d43bc\x2dbbae\x2d3ccb2693342f.mount: Deactivated successfully.
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.722 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da989677-bb1a-43bc-bbae-3ccb2693342f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:13:08 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:08.722 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3ff500-443a-4711-a196-091083939031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.841 189283 INFO nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.869 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.954 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.955 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.956 189283 INFO nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Creating image(s)
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.956 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "/var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.957 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "/var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.958 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "/var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:08 compute-0 nova_compute[189279]: 2025-12-10 20:13:08.971 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.031 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.033 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.033 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.044 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.101 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.103 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.148 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.149 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.150 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.209 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.211 189283 DEBUG nova.virt.disk.api [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Checking if we can resize image /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.211 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.303 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.305 189283 DEBUG nova.virt.disk.api [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Cannot resize image /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.305 189283 DEBUG nova.objects.instance [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lazy-loading 'migration_context' on Instance uuid a4a66175-57ff-48da-8473-e93f72da4499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.322 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.323 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Ensure instance console log exists: /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.323 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.324 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.325 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.435 189283 DEBUG nova.policy [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '598a18069aae495194ab1b43958530aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a51cea6d1cb40c383b87a400100e902', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.525 189283 DEBUG nova.compute.manager [req-98342fc9-e7b5-45af-ae5e-3f1087df8d34 req-10e4b002-25fc-4b42-9902-f44f13cd0e43 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-vif-unplugged-fd5af3d6-f054-4886-9ca7-2888772def6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.526 189283 DEBUG oslo_concurrency.lockutils [req-98342fc9-e7b5-45af-ae5e-3f1087df8d34 req-10e4b002-25fc-4b42-9902-f44f13cd0e43 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.526 189283 DEBUG oslo_concurrency.lockutils [req-98342fc9-e7b5-45af-ae5e-3f1087df8d34 req-10e4b002-25fc-4b42-9902-f44f13cd0e43 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.527 189283 DEBUG oslo_concurrency.lockutils [req-98342fc9-e7b5-45af-ae5e-3f1087df8d34 req-10e4b002-25fc-4b42-9902-f44f13cd0e43 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.527 189283 DEBUG nova.compute.manager [req-98342fc9-e7b5-45af-ae5e-3f1087df8d34 req-10e4b002-25fc-4b42-9902-f44f13cd0e43 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] No waiting events found dispatching network-vif-unplugged-fd5af3d6-f054-4886-9ca7-2888772def6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.528 189283 DEBUG nova.compute.manager [req-98342fc9-e7b5-45af-ae5e-3f1087df8d34 req-10e4b002-25fc-4b42-9902-f44f13cd0e43 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-vif-unplugged-fd5af3d6-f054-4886-9ca7-2888772def6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.891 189283 DEBUG nova.compute.manager [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-changed-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.892 189283 DEBUG nova.compute.manager [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Refreshing instance network info cache due to event network-changed-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.893 189283 DEBUG oslo_concurrency.lockutils [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.893 189283 DEBUG oslo_concurrency.lockutils [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:09 compute-0 nova_compute[189279]: 2025-12-10 20:13:09.894 189283 DEBUG nova.network.neutron [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Refreshing network info cache for port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.082 189283 DEBUG nova.network.neutron [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updated VIF entry in instance network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.084 189283 DEBUG nova.network.neutron [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.105 189283 DEBUG oslo_concurrency.lockutils [req-959c3109-aae0-40b3-8956-a076cbbdf64e req-046663a1-9116-4e99-b825-c5cbc474c3f1 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.773 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.810 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.967 189283 DEBUG nova.network.neutron [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Updated VIF entry in instance network info cache for port fd5af3d6-f054-4886-9ca7-2888772def6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.969 189283 DEBUG nova.network.neutron [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Updating instance_info_cache with network_info: [{"id": "fd5af3d6-f054-4886-9ca7-2888772def6f", "address": "fa:16:3e:ae:2f:3e", "network": {"id": "da989677-bb1a-43bc-bbae-3ccb2693342f", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-952491667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "713d58cceef640c38aa99b2cb5aafd50", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd5af3d6-f0", "ovs_interfaceid": "fd5af3d6-f054-4886-9ca7-2888772def6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:10 compute-0 nova_compute[189279]: 2025-12-10 20:13:10.996 189283 DEBUG oslo_concurrency.lockutils [req-d52c019f-83ca-45e7-af47-b87791bdecad req-2c5c0b2f-5c41-4955-bc21-35c4e34d4d2f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.370 189283 DEBUG nova.network.neutron [-] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.392 189283 INFO nova.compute.manager [-] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Took 2.79 seconds to deallocate network for instance.
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.444 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.445 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.605 189283 DEBUG nova.compute.provider_tree [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.627 189283 DEBUG nova.scheduler.client.report [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.651 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.675 189283 INFO nova.scheduler.client.report [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Deleted allocations for instance 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.775 189283 DEBUG oslo_concurrency.lockutils [None req-33102764-fc53-4233-b3fd-7adf21836170 eb4b85bd92294252be8009eb039aa323 713d58cceef640c38aa99b2cb5aafd50 - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.863 189283 DEBUG nova.compute.manager [req-b6145404-3977-4c6f-a689-0591b09fb6b7 req-e3040ee1-2b33-4d7d-aadd-cb664ed79653 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.865 189283 DEBUG oslo_concurrency.lockutils [req-b6145404-3977-4c6f-a689-0591b09fb6b7 req-e3040ee1-2b33-4d7d-aadd-cb664ed79653 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.866 189283 DEBUG oslo_concurrency.lockutils [req-b6145404-3977-4c6f-a689-0591b09fb6b7 req-e3040ee1-2b33-4d7d-aadd-cb664ed79653 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.867 189283 DEBUG oslo_concurrency.lockutils [req-b6145404-3977-4c6f-a689-0591b09fb6b7 req-e3040ee1-2b33-4d7d-aadd-cb664ed79653 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "89dd49b4-ab03-4bc5-84ea-a2ae3b040e06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.868 189283 DEBUG nova.compute.manager [req-b6145404-3977-4c6f-a689-0591b09fb6b7 req-e3040ee1-2b33-4d7d-aadd-cb664ed79653 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] No waiting events found dispatching network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.869 189283 WARNING nova.compute.manager [req-b6145404-3977-4c6f-a689-0591b09fb6b7 req-e3040ee1-2b33-4d7d-aadd-cb664ed79653 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received unexpected event network-vif-plugged-fd5af3d6-f054-4886-9ca7-2888772def6f for instance with vm_state deleted and task_state None.
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.870 189283 DEBUG nova.compute.manager [req-b6145404-3977-4c6f-a689-0591b09fb6b7 req-e3040ee1-2b33-4d7d-aadd-cb664ed79653 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Received event network-vif-deleted-fd5af3d6-f054-4886-9ca7-2888772def6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:11 compute-0 nova_compute[189279]: 2025-12-10 20:13:11.953 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:12 compute-0 nova_compute[189279]: 2025-12-10 20:13:12.219 189283 DEBUG nova.network.neutron [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Successfully created port: 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:13:13 compute-0 nova_compute[189279]: 2025-12-10 20:13:13.515 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.092 189283 DEBUG nova.network.neutron [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updated VIF entry in instance network info cache for port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.094 189283 DEBUG nova.network.neutron [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updating instance_info_cache with network_info: [{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.116 189283 DEBUG oslo_concurrency.lockutils [req-4b91bf76-4e10-474f-90db-b91fa5669f01 req-4e320672-f2f9-47ef-b47a-522dff44a50c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.167 189283 DEBUG nova.network.neutron [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Successfully updated port: 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.183 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.184 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquired lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.185 189283 DEBUG nova.network.neutron [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.259 189283 DEBUG nova.compute.manager [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received event network-changed-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.260 189283 DEBUG nova.compute.manager [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Refreshing instance network info cache due to event network-changed-3ae03bc4-7221-4da1-8e97-1a1ea168ac84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.261 189283 DEBUG oslo_concurrency.lockutils [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:14 compute-0 nova_compute[189279]: 2025-12-10 20:13:14.629 189283 DEBUG nova.network.neutron [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:13:15 compute-0 podman[249565]: 2025-12-10 20:13:15.11200489 +0000 UTC m=+0.088738229 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 10 20:13:15 compute-0 podman[249567]: 2025-12-10 20:13:15.128428844 +0000 UTC m=+0.094912387 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:13:15 compute-0 podman[249566]: 2025-12-10 20:13:15.133059429 +0000 UTC m=+0.104717691 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.010 189283 DEBUG nova.network.neutron [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updating instance_info_cache with network_info: [{"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.035 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Releasing lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.037 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Instance network_info: |[{"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.038 189283 DEBUG oslo_concurrency.lockutils [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.039 189283 DEBUG nova.network.neutron [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Refreshing network info cache for port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.044 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Start _get_guest_xml network_info=[{"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.056 189283 WARNING nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.067 189283 DEBUG nova.virt.libvirt.host [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.069 189283 DEBUG nova.virt.libvirt.host [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.076 189283 DEBUG nova.virt.libvirt.host [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.078 189283 DEBUG nova.virt.libvirt.host [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.079 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.080 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.081 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.082 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.083 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.084 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.085 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.086 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.087 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.088 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.089 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.090 189283 DEBUG nova.virt.hardware [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.104 189283 DEBUG nova.virt.libvirt.vif [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430019440',display_name='tempest-TestNetworkBasicOps-server-1430019440',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430019440',id=10,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB4E+QQbCnMKR4Bqdjha+rs4A0/JyNIyai0SC4OFeCF3EnGfKMIqFc/YZBttl6lpjVQTEtQAwCW4j1L5i/kG3kkf68MHHviiDU+MYShWguHMhoAFUF8RQ+bl7fw8EmQuPQ==',key_name='tempest-TestNetworkBasicOps-103146956',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a51cea6d1cb40c383b87a400100e902',ramdisk_id='',reservation_id='r-fzq7r9os',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1301966146',owner_user_name='tempest-TestNetworkBasicOps-1301966146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:13:08Z,user_data=None,user_id='598a18069aae495194ab1b43958530aa',uuid=a4a66175-57ff-48da-8473-e93f72da4499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.106 189283 DEBUG nova.network.os_vif_util [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converting VIF {"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.107 189283 DEBUG nova.network.os_vif_util [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:ab:64,bridge_name='br-int',has_traffic_filtering=True,id=3ae03bc4-7221-4da1-8e97-1a1ea168ac84,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae03bc4-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.109 189283 DEBUG nova.objects.instance [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4a66175-57ff-48da-8473-e93f72da4499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.126 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <uuid>a4a66175-57ff-48da-8473-e93f72da4499</uuid>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <name>instance-0000000a</name>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <nova:name>tempest-TestNetworkBasicOps-server-1430019440</nova:name>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:13:16</nova:creationTime>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:user uuid="598a18069aae495194ab1b43958530aa">tempest-TestNetworkBasicOps-1301966146-project-member</nova:user>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:project uuid="8a51cea6d1cb40c383b87a400100e902">tempest-TestNetworkBasicOps-1301966146</nova:project>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         <nova:port uuid="3ae03bc4-7221-4da1-8e97-1a1ea168ac84">
Dec 10 20:13:16 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <system>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <entry name="serial">a4a66175-57ff-48da-8473-e93f72da4499</entry>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <entry name="uuid">a4a66175-57ff-48da-8473-e93f72da4499</entry>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </system>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <os>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   </os>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <features>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   </features>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk.config"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:a8:ab:64"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <target dev="tap3ae03bc4-72"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/console.log" append="off"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <video>
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </video>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:13:16 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:13:16 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:13:16 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:13:16 compute-0 nova_compute[189279]: </domain>
Dec 10 20:13:16 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.142 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Preparing to wait for external event network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.142 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a4a66175-57ff-48da-8473-e93f72da4499-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.142 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.143 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.143 189283 DEBUG nova.virt.libvirt.vif [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430019440',display_name='tempest-TestNetworkBasicOps-server-1430019440',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430019440',id=10,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB4E+QQbCnMKR4Bqdjha+rs4A0/JyNIyai0SC4OFeCF3EnGfKMIqFc/YZBttl6lpjVQTEtQAwCW4j1L5i/kG3kkf68MHHviiDU+MYShWguHMhoAFUF8RQ+bl7fw8EmQuPQ==',key_name='tempest-TestNetworkBasicOps-103146956',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a51cea6d1cb40c383b87a400100e902',ramdisk_id='',reservation_id='r-fzq7r9os',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1301966146',owner_user_name='tempest-TestNetworkBasicOps-1301966146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:13:08Z,user_data=None,user_id='598a18069aae495194ab1b43958530aa',uuid=a4a66175-57ff-48da-8473-e93f72da4499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.144 189283 DEBUG nova.network.os_vif_util [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converting VIF {"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.144 189283 DEBUG nova.network.os_vif_util [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:ab:64,bridge_name='br-int',has_traffic_filtering=True,id=3ae03bc4-7221-4da1-8e97-1a1ea168ac84,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae03bc4-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.145 189283 DEBUG os_vif [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:ab:64,bridge_name='br-int',has_traffic_filtering=True,id=3ae03bc4-7221-4da1-8e97-1a1ea168ac84,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae03bc4-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.147 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.148 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.148 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.153 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.153 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ae03bc4-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.154 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ae03bc4-72, col_values=(('external_ids', {'iface-id': '3ae03bc4-7221-4da1-8e97-1a1ea168ac84', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:ab:64', 'vm-uuid': 'a4a66175-57ff-48da-8473-e93f72da4499'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:16 compute-0 NetworkManager[56238]: <info>  [1765397596.1588] manager: (tap3ae03bc4-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.159 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.163 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.166 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.168 189283 INFO os_vif [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:ab:64,bridge_name='br-int',has_traffic_filtering=True,id=3ae03bc4-7221-4da1-8e97-1a1ea168ac84,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae03bc4-72')
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.227 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.228 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.229 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] No VIF found with MAC fa:16:3e:a8:ab:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.230 189283 INFO nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Using config drive
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.635 189283 INFO nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Creating config drive at /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk.config
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.648 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyhd1ky69 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.786 189283 DEBUG oslo_concurrency.processutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyhd1ky69" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:16 compute-0 kernel: tap3ae03bc4-72: entered promiscuous mode
Dec 10 20:13:16 compute-0 NetworkManager[56238]: <info>  [1765397596.8937] manager: (tap3ae03bc4-72): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Dec 10 20:13:16 compute-0 ovn_controller[97701]: 2025-12-10T20:13:16Z|00111|binding|INFO|Claiming lport 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 for this chassis.
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.893 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 ovn_controller[97701]: 2025-12-10T20:13:16Z|00112|binding|INFO|3ae03bc4-7221-4da1-8e97-1a1ea168ac84: Claiming fa:16:3e:a8:ab:64 10.100.0.14
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.903 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.920 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:ab:64 10.100.0.14'], port_security=['fa:16:3e:a8:ab:64 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a4a66175-57ff-48da-8473-e93f72da4499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4388b363-773a-4716-8c7d-00d02392bfdb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a51cea6d1cb40c383b87a400100e902', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e2eba8bb-e846-494e-a7a9-776afed9b12b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e9ca3af-f428-458c-a5cc-cfb31b816028, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=3ae03bc4-7221-4da1-8e97-1a1ea168ac84) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.921 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 in datapath 4388b363-773a-4716-8c7d-00d02392bfdb bound to our chassis
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.923 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4388b363-773a-4716-8c7d-00d02392bfdb
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.926 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 ovn_controller[97701]: 2025-12-10T20:13:16Z|00113|binding|INFO|Setting lport 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 ovn-installed in OVS
Dec 10 20:13:16 compute-0 ovn_controller[97701]: 2025-12-10T20:13:16Z|00114|binding|INFO|Setting lport 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 up in Southbound
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.931 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.954 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f3cbc4-9fa7-4a9d-aec1-60e6bca9dd14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.956 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4388b363-71 in ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:13:16 compute-0 nova_compute[189279]: 2025-12-10 20:13:16.957 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.960 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4388b363-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.960 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f6d7cf-509b-4d55-8022-a8fa300bf174]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.962 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[99f44c39-74cb-40b9-b5d6-885665af10b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:16 compute-0 systemd-machined[155642]: New machine qemu-10-instance-0000000a.
Dec 10 20:13:16 compute-0 systemd-udevd[249642]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:13:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:16.985 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[90c77229-1976-497a-9fff-bbd76897bf4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:16 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec 10 20:13:16 compute-0 NetworkManager[56238]: <info>  [1765397596.9969] device (tap3ae03bc4-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:13:16 compute-0 NetworkManager[56238]: <info>  [1765397596.9996] device (tap3ae03bc4-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.017 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e58d70-2149-4432-b908-524c19d1b125]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.054 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[37d589ba-ed8b-4e98-a13e-2bfc9ec9b1b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 NetworkManager[56238]: <info>  [1765397597.0641] manager: (tap4388b363-70): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.063 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[1b768daf-616f-42a6-b33d-be67974b278e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.119 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[30e95593-54f1-4081-8d75-c4c5174d82dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.124 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[34e5acec-ddcc-49e9-a71a-83c8896e6e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 NetworkManager[56238]: <info>  [1765397597.1671] device (tap4388b363-70): carrier: link connected
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.177 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[454e7af1-8f96-40b1-824f-2608343c0d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.206 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ea80c9b8-9a9a-4a45-b98e-309dad9d27f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4388b363-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492582, 'reachable_time': 15439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249673, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.234 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6746afbd-fcb0-49cb-8b18-cbcfc6d7ce6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492582, 'tstamp': 492582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249674, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.259 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6d97176c-0a10-47d9-8dc9-46fc32791e31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4388b363-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492582, 'reachable_time': 15439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249675, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.318 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3eb3fc-1463-4a92-8bd1-b6e72b6c746f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.404 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ceb511-5bac-4fb3-8407-2cc434e98c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.406 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4388b363-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.406 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.406 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4388b363-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.406 189283 DEBUG nova.compute.manager [req-986a8ec4-b960-4cd5-9566-fc651a4829de req-e0c9ef35-f748-4cd9-98ea-a4d7df2bd626 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received event network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.408 189283 DEBUG oslo_concurrency.lockutils [req-986a8ec4-b960-4cd5-9566-fc651a4829de req-e0c9ef35-f748-4cd9-98ea-a4d7df2bd626 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "a4a66175-57ff-48da-8473-e93f72da4499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.409 189283 DEBUG oslo_concurrency.lockutils [req-986a8ec4-b960-4cd5-9566-fc651a4829de req-e0c9ef35-f748-4cd9-98ea-a4d7df2bd626 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.409 189283 DEBUG oslo_concurrency.lockutils [req-986a8ec4-b960-4cd5-9566-fc651a4829de req-e0c9ef35-f748-4cd9-98ea-a4d7df2bd626 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.410 189283 DEBUG nova.compute.manager [req-986a8ec4-b960-4cd5-9566-fc651a4829de req-e0c9ef35-f748-4cd9-98ea-a4d7df2bd626 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Processing event network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:13:17 compute-0 kernel: tap4388b363-70: entered promiscuous mode
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.410 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:17 compute-0 NetworkManager[56238]: <info>  [1765397597.4109] manager: (tap4388b363-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.414 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.415 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4388b363-70, col_values=(('external_ids', {'iface-id': 'c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.417 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:17 compute-0 ovn_controller[97701]: 2025-12-10T20:13:17Z|00115|binding|INFO|Releasing lport c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc from this chassis (sb_readonly=0)
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.442 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.446 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4388b363-773a-4716-8c7d-00d02392bfdb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4388b363-773a-4716-8c7d-00d02392bfdb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.447 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e2141edf-1bcc-49a5-b3ae-0e3026703e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.448 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-4388b363-773a-4716-8c7d-00d02392bfdb
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/4388b363-773a-4716-8c7d-00d02392bfdb.pid.haproxy
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID 4388b363-773a-4716-8c7d-00d02392bfdb
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:13:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:17.449 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'env', 'PROCESS_TAG=haproxy-4388b363-773a-4716-8c7d-00d02392bfdb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4388b363-773a-4716-8c7d-00d02392bfdb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.967 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.969 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397597.9667573, a4a66175-57ff-48da-8473-e93f72da4499 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.969 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] VM Started (Lifecycle Event)
Dec 10 20:13:17 compute-0 podman[249711]: 2025-12-10 20:13:17.983422832 +0000 UTC m=+0.083202709 container create 8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 10 20:13:17 compute-0 nova_compute[189279]: 2025-12-10 20:13:17.988 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.001 189283 INFO nova.virt.libvirt.driver [-] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Instance spawned successfully.
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.002 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.018 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:18 compute-0 podman[249711]: 2025-12-10 20:13:17.941452888 +0000 UTC m=+0.041232805 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.049 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.061 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.062 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.063 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:18 compute-0 systemd[1]: Started libpod-conmon-8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4.scope.
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.067 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.072 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.075 189283 DEBUG nova.virt.libvirt.driver [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.081 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.082 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397597.9679472, a4a66175-57ff-48da-8473-e93f72da4499 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.082 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] VM Paused (Lifecycle Event)
Dec 10 20:13:18 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c815b724e4217cb31fd3473513970c4c111cd7d49f74e11af5324501e04a90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.111 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.119 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397597.97271, a4a66175-57ff-48da-8473-e93f72da4499 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.119 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] VM Resumed (Lifecycle Event)
Dec 10 20:13:18 compute-0 podman[249711]: 2025-12-10 20:13:18.139795079 +0000 UTC m=+0.239574986 container init 8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.145 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:18 compute-0 podman[249711]: 2025-12-10 20:13:18.148100134 +0000 UTC m=+0.247880041 container start 8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.153 189283 INFO nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Took 9.20 seconds to spawn the instance on the hypervisor.
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.154 189283 DEBUG nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.156 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:18 compute-0 neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb[249727]: [NOTICE]   (249731) : New worker (249733) forked
Dec 10 20:13:18 compute-0 neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb[249727]: [NOTICE]   (249731) : Loading success.
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.203 189283 DEBUG nova.network.neutron [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updated VIF entry in instance network info cache for port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.204 189283 DEBUG nova.network.neutron [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updating instance_info_cache with network_info: [{"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.208 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.226 189283 DEBUG oslo_concurrency.lockutils [req-f08483e5-93fd-4c6c-8cdb-d5d5356ec995 req-8ddd7f19-731b-45d7-abfb-16375fd0106a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.255 189283 INFO nova.compute.manager [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Took 9.98 seconds to build instance.
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.272 189283 DEBUG oslo_concurrency.lockutils [None req-041949f2-3899-4bcd-bf3e-46e66b24e4b9 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:18 compute-0 ovn_controller[97701]: 2025-12-10T20:13:18Z|00116|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:18 compute-0 ovn_controller[97701]: 2025-12-10T20:13:18Z|00117|binding|INFO|Releasing lport c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc from this chassis (sb_readonly=0)
Dec 10 20:13:18 compute-0 ovn_controller[97701]: 2025-12-10T20:13:18Z|00118|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:13:18 compute-0 nova_compute[189279]: 2025-12-10 20:13:18.547 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:19 compute-0 nova_compute[189279]: 2025-12-10 20:13:19.678 189283 DEBUG nova.compute.manager [req-b12b330c-0925-45a4-b288-8c848d295ac1 req-33dc04b3-65f9-415a-9a99-741ba557959f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received event network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:19 compute-0 nova_compute[189279]: 2025-12-10 20:13:19.680 189283 DEBUG oslo_concurrency.lockutils [req-b12b330c-0925-45a4-b288-8c848d295ac1 req-33dc04b3-65f9-415a-9a99-741ba557959f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "a4a66175-57ff-48da-8473-e93f72da4499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:19 compute-0 nova_compute[189279]: 2025-12-10 20:13:19.681 189283 DEBUG oslo_concurrency.lockutils [req-b12b330c-0925-45a4-b288-8c848d295ac1 req-33dc04b3-65f9-415a-9a99-741ba557959f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:19 compute-0 nova_compute[189279]: 2025-12-10 20:13:19.682 189283 DEBUG oslo_concurrency.lockutils [req-b12b330c-0925-45a4-b288-8c848d295ac1 req-33dc04b3-65f9-415a-9a99-741ba557959f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:19 compute-0 nova_compute[189279]: 2025-12-10 20:13:19.682 189283 DEBUG nova.compute.manager [req-b12b330c-0925-45a4-b288-8c848d295ac1 req-33dc04b3-65f9-415a-9a99-741ba557959f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] No waiting events found dispatching network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:19 compute-0 nova_compute[189279]: 2025-12-10 20:13:19.682 189283 WARNING nova.compute.manager [req-b12b330c-0925-45a4-b288-8c848d295ac1 req-33dc04b3-65f9-415a-9a99-741ba557959f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received unexpected event network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 for instance with vm_state active and task_state None.
Dec 10 20:13:19 compute-0 nova_compute[189279]: 2025-12-10 20:13:19.737 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:20 compute-0 nova_compute[189279]: 2025-12-10 20:13:20.839 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:21 compute-0 podman[249744]: 2025-12-10 20:13:21.102339634 +0000 UTC m=+0.075214384 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:13:21 compute-0 podman[249743]: 2025-12-10 20:13:21.126448466 +0000 UTC m=+0.102358088 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 20:13:21 compute-0 nova_compute[189279]: 2025-12-10 20:13:21.157 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:21 compute-0 nova_compute[189279]: 2025-12-10 20:13:21.959 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.237 189283 DEBUG nova.compute.manager [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received event network-changed-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.238 189283 DEBUG nova.compute.manager [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Refreshing instance network info cache due to event network-changed-3ae03bc4-7221-4da1-8e97-1a1ea168ac84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.239 189283 DEBUG oslo_concurrency.lockutils [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.240 189283 DEBUG oslo_concurrency.lockutils [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.241 189283 DEBUG nova.network.neutron [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Refreshing network info cache for port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:23.396 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:23.397 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:23.399 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.472 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397588.46009, 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.474 189283 INFO nova.compute.manager [-] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] VM Stopped (Lifecycle Event)
Dec 10 20:13:23 compute-0 nova_compute[189279]: 2025-12-10 20:13:23.515 189283 DEBUG nova.compute.manager [None req-3704a034-6a7e-4ad2-a850-05b9d0ebef1d - - - - - -] [instance: 89dd49b4-ab03-4bc5-84ea-a2ae3b040e06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:24 compute-0 podman[249782]: 2025-12-10 20:13:24.147010849 +0000 UTC m=+0.127954239 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec 10 20:13:25 compute-0 nova_compute[189279]: 2025-12-10 20:13:25.682 189283 DEBUG nova.network.neutron [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updated VIF entry in instance network info cache for port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:25 compute-0 nova_compute[189279]: 2025-12-10 20:13:25.682 189283 DEBUG nova.network.neutron [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updating instance_info_cache with network_info: [{"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:25 compute-0 nova_compute[189279]: 2025-12-10 20:13:25.701 189283 DEBUG oslo_concurrency.lockutils [req-9da719af-94a5-4095-8665-7ea17a13067b req-888c1b45-7797-4630-a812-ea07580d7e26 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:26 compute-0 nova_compute[189279]: 2025-12-10 20:13:26.160 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:26 compute-0 nova_compute[189279]: 2025-12-10 20:13:26.924 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:26 compute-0 nova_compute[189279]: 2025-12-10 20:13:26.962 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:27 compute-0 sshd-session[249805]: Invalid user solv from 80.94.92.184 port 60144
Dec 10 20:13:27 compute-0 nova_compute[189279]: 2025-12-10 20:13:27.704 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:27 compute-0 sshd-session[249805]: Connection closed by invalid user solv 80.94.92.184 port 60144 [preauth]
Dec 10 20:13:28 compute-0 nova_compute[189279]: 2025-12-10 20:13:28.330 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:29 compute-0 podman[249807]: 2025-12-10 20:13:29.101383092 +0000 UTC m=+0.082585323 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, config_id=edpm, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute)
Dec 10 20:13:29 compute-0 podman[203484]: time="2025-12-10T20:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:13:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Dec 10 20:13:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5729 "" "Go-http-client/1.1"
Dec 10 20:13:31 compute-0 nova_compute[189279]: 2025-12-10 20:13:31.163 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:31 compute-0 openstack_network_exporter[205632]: ERROR   20:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:13:31 compute-0 openstack_network_exporter[205632]: ERROR   20:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:13:31 compute-0 openstack_network_exporter[205632]: ERROR   20:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:13:31 compute-0 openstack_network_exporter[205632]: ERROR   20:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:13:31 compute-0 openstack_network_exporter[205632]: ERROR   20:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:13:31 compute-0 nova_compute[189279]: 2025-12-10 20:13:31.964 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:32 compute-0 nova_compute[189279]: 2025-12-10 20:13:32.324 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:34 compute-0 nova_compute[189279]: 2025-12-10 20:13:34.860 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:34 compute-0 nova_compute[189279]: 2025-12-10 20:13:34.861 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:34 compute-0 nova_compute[189279]: 2025-12-10 20:13:34.896 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:13:34 compute-0 nova_compute[189279]: 2025-12-10 20:13:34.994 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:34 compute-0 nova_compute[189279]: 2025-12-10 20:13:34.995 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.005 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.006 189283 INFO nova.compute.claims [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.146 189283 DEBUG nova.compute.provider_tree [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.179 189283 DEBUG nova.scheduler.client.report [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.204 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.206 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.260 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.262 189283 DEBUG nova.network.neutron [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.285 189283 INFO nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.304 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.395 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.397 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.405 189283 INFO nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Creating image(s)
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.406 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "/var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.406 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "/var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.407 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "/var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.427 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.461 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.462 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.482 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.512 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.515 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.516 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.532 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.560 189283 DEBUG nova.policy [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95e701c408554b41bc92928902567588', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8eb26407c54625b02b8a9d59d7c0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.592 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.593 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.603 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.604 189283 INFO nova.compute.claims [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.609 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.610 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:35 compute-0 ovn_controller[97701]: 2025-12-10T20:13:35Z|00119|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:35 compute-0 ovn_controller[97701]: 2025-12-10T20:13:35Z|00120|binding|INFO|Releasing lport c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc from this chassis (sb_readonly=0)
Dec 10 20:13:35 compute-0 ovn_controller[97701]: 2025-12-10T20:13:35Z|00121|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.663 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.677 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.686 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.728 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.765 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.767 189283 DEBUG nova.virt.disk.api [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Checking if we can resize image /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.768 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.852 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.853 189283 DEBUG nova.virt.disk.api [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Cannot resize image /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.854 189283 DEBUG nova.objects.instance [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lazy-loading 'migration_context' on Instance uuid 1de5d51a-1c96-47d3-9e57-500874113cc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.877 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.878 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Ensure instance console log exists: /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.879 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.880 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.880 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:35 compute-0 nova_compute[189279]: 2025-12-10 20:13:35.997 189283 DEBUG nova.compute.provider_tree [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.018 189283 DEBUG nova.scheduler.client.report [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.047 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.049 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.097 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.098 189283 DEBUG nova.network.neutron [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.120 189283 INFO nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.143 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.168 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.235 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.237 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.238 189283 INFO nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Creating image(s)
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.240 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "/var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.240 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "/var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.242 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "/var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.257 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.341 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.345 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.347 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.368 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.403 189283 DEBUG nova.policy [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f9ff7d7d145486fb37626518d98db5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76049963481942ac8475b7a40994cc54', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.461 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.466 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.529 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk 1073741824" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.536 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.541 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.628 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.630 189283 DEBUG nova.virt.disk.api [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Checking if we can resize image /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.631 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:36 compute-0 ovn_controller[97701]: 2025-12-10T20:13:36Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:c2:44 10.100.0.11
Dec 10 20:13:36 compute-0 ovn_controller[97701]: 2025-12-10T20:13:36Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:c2:44 10.100.0.11
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.746 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.748 189283 DEBUG nova.virt.disk.api [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Cannot resize image /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.749 189283 DEBUG nova.objects.instance [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lazy-loading 'migration_context' on Instance uuid 47d38c42-e665-400f-831e-4bb560cd5fdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.764 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.764 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Ensure instance console log exists: /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.765 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.765 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.766 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.966 189283 DEBUG nova.network.neutron [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Successfully created port: 200d878c-fe4b-43e4-bae3-5d660334bbc3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:13:36 compute-0 nova_compute[189279]: 2025-12-10 20:13:36.970 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:37 compute-0 nova_compute[189279]: 2025-12-10 20:13:37.448 189283 DEBUG nova.network.neutron [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Successfully created port: ac26be7d-6e8b-41ce-b924-41df4889751e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:13:37 compute-0 ovn_controller[97701]: 2025-12-10T20:13:37Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:b0:0b 10.100.0.8
Dec 10 20:13:37 compute-0 ovn_controller[97701]: 2025-12-10T20:13:37Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:b0:0b 10.100.0.8
Dec 10 20:13:38 compute-0 podman[249891]: 2025-12-10 20:13:38.120288466 +0000 UTC m=+0.096340386 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:13:38 compute-0 podman[249892]: 2025-12-10 20:13:38.124736876 +0000 UTC m=+0.097287321 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 10 20:13:38 compute-0 nova_compute[189279]: 2025-12-10 20:13:38.985 189283 DEBUG nova.network.neutron [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Successfully updated port: 200d878c-fe4b-43e4-bae3-5d660334bbc3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.008 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.008 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquired lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.008 189283 DEBUG nova.network.neutron [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.118 189283 DEBUG nova.compute.manager [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-changed-200d878c-fe4b-43e4-bae3-5d660334bbc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.118 189283 DEBUG nova.compute.manager [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Refreshing instance network info cache due to event network-changed-200d878c-fe4b-43e4-bae3-5d660334bbc3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.119 189283 DEBUG oslo_concurrency.lockutils [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.190 189283 DEBUG nova.network.neutron [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Successfully updated port: ac26be7d-6e8b-41ce-b924-41df4889751e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.208 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "refresh_cache-47d38c42-e665-400f-831e-4bb560cd5fdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.209 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquired lock "refresh_cache-47d38c42-e665-400f-831e-4bb560cd5fdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.209 189283 DEBUG nova.network.neutron [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.254 189283 DEBUG nova.network.neutron [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.357 189283 DEBUG nova.compute.manager [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received event network-changed-ac26be7d-6e8b-41ce-b924-41df4889751e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.358 189283 DEBUG nova.compute.manager [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Refreshing instance network info cache due to event network-changed-ac26be7d-6e8b-41ce-b924-41df4889751e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.358 189283 DEBUG oslo_concurrency.lockutils [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-47d38c42-e665-400f-831e-4bb560cd5fdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:39 compute-0 nova_compute[189279]: 2025-12-10 20:13:39.912 189283 DEBUG nova.network.neutron [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.172 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.242 189283 DEBUG nova.network.neutron [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Updating instance_info_cache with network_info: [{"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.272 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Releasing lock "refresh_cache-47d38c42-e665-400f-831e-4bb560cd5fdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.273 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Instance network_info: |[{"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.274 189283 DEBUG oslo_concurrency.lockutils [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-47d38c42-e665-400f-831e-4bb560cd5fdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.274 189283 DEBUG nova.network.neutron [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Refreshing network info cache for port ac26be7d-6e8b-41ce-b924-41df4889751e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.282 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Start _get_guest_xml network_info=[{"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.295 189283 WARNING nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.312 189283 DEBUG nova.virt.libvirt.host [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.314 189283 DEBUG nova.virt.libvirt.host [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.320 189283 DEBUG nova.virt.libvirt.host [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.321 189283 DEBUG nova.virt.libvirt.host [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.322 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.322 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.323 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.323 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.323 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.324 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.324 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.324 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.324 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.325 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.325 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.325 189283 DEBUG nova.virt.hardware [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.330 189283 DEBUG nova.virt.libvirt.vif [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-849312876',display_name='tempest-ServerAddressesTestJSON-server-849312876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-849312876',id=12,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76049963481942ac8475b7a40994cc54',ramdisk_id='',reservation_id='r-noeicbi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-421083304',owner_user_name='tempest-ServerAddressesTestJSON-421083304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:13:36Z,user_data=None,user_id='3f9ff7d7d145486fb37626518d98db5e',uuid=47d38c42-e665-400f-831e-4bb560cd5fdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.330 189283 DEBUG nova.network.os_vif_util [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Converting VIF {"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.331 189283 DEBUG nova.network.os_vif_util [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:af:0b,bridge_name='br-int',has_traffic_filtering=True,id=ac26be7d-6e8b-41ce-b924-41df4889751e,network=Network(a84a3a12-17fa-4570-b2cb-3daff5d43bee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac26be7d-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.332 189283 DEBUG nova.objects.instance [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47d38c42-e665-400f-831e-4bb560cd5fdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.349 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <uuid>47d38c42-e665-400f-831e-4bb560cd5fdb</uuid>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <name>instance-0000000c</name>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:name>tempest-ServerAddressesTestJSON-server-849312876</nova:name>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:13:41</nova:creationTime>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:user uuid="3f9ff7d7d145486fb37626518d98db5e">tempest-ServerAddressesTestJSON-421083304-project-member</nova:user>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:project uuid="76049963481942ac8475b7a40994cc54">tempest-ServerAddressesTestJSON-421083304</nova:project>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:port uuid="ac26be7d-6e8b-41ce-b924-41df4889751e">
Dec 10 20:13:41 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <system>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="serial">47d38c42-e665-400f-831e-4bb560cd5fdb</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="uuid">47d38c42-e665-400f-831e-4bb560cd5fdb</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </system>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <os>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </os>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <features>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </features>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk.config"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:3a:af:0b"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <target dev="tapac26be7d-6e"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/console.log" append="off"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <video>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </video>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:13:41 compute-0 nova_compute[189279]: </domain>
Dec 10 20:13:41 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.350 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Preparing to wait for external event network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.350 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.351 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.351 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.352 189283 DEBUG nova.virt.libvirt.vif [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-849312876',display_name='tempest-ServerAddressesTestJSON-server-849312876',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-849312876',id=12,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76049963481942ac8475b7a40994cc54',ramdisk_id='',reservation_id='r-noeicbi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-421083304',owner_user_name='tempest-ServerAddressesTestJSON-421083304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:13:36Z,user_data=None,user_id='3f9ff7d7d145486fb37626518d98db5e',uuid=47d38c42-e665-400f-831e-4bb560cd5fdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.352 189283 DEBUG nova.network.os_vif_util [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Converting VIF {"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.353 189283 DEBUG nova.network.os_vif_util [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:af:0b,bridge_name='br-int',has_traffic_filtering=True,id=ac26be7d-6e8b-41ce-b924-41df4889751e,network=Network(a84a3a12-17fa-4570-b2cb-3daff5d43bee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac26be7d-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.354 189283 DEBUG os_vif [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:af:0b,bridge_name='br-int',has_traffic_filtering=True,id=ac26be7d-6e8b-41ce-b924-41df4889751e,network=Network(a84a3a12-17fa-4570-b2cb-3daff5d43bee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac26be7d-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.354 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.355 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.356 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.360 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.360 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac26be7d-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.361 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac26be7d-6e, col_values=(('external_ids', {'iface-id': 'ac26be7d-6e8b-41ce-b924-41df4889751e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:af:0b', 'vm-uuid': '47d38c42-e665-400f-831e-4bb560cd5fdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.364 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.366 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:13:41 compute-0 NetworkManager[56238]: <info>  [1765397621.3671] manager: (tapac26be7d-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.376 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.377 189283 INFO os_vif [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:af:0b,bridge_name='br-int',has_traffic_filtering=True,id=ac26be7d-6e8b-41ce-b924-41df4889751e,network=Network(a84a3a12-17fa-4570-b2cb-3daff5d43bee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac26be7d-6e')
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.383 189283 DEBUG nova.network.neutron [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Updating instance_info_cache with network_info: [{"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.628 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Releasing lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.628 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Instance network_info: |[{"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.629 189283 DEBUG oslo_concurrency.lockutils [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.630 189283 DEBUG nova.network.neutron [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Refreshing network info cache for port 200d878c-fe4b-43e4-bae3-5d660334bbc3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.635 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Start _get_guest_xml network_info=[{"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.649 189283 WARNING nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.662 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.662 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.663 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] No VIF found with MAC fa:16:3e:3a:af:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.663 189283 INFO nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Using config drive
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.667 189283 DEBUG nova.virt.libvirt.host [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.669 189283 DEBUG nova.virt.libvirt.host [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.676 189283 DEBUG nova.virt.libvirt.host [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.677 189283 DEBUG nova.virt.libvirt.host [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.678 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.678 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.679 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.679 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.679 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.680 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.680 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.680 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.681 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.681 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.682 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.682 189283 DEBUG nova.virt.hardware [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.688 189283 DEBUG nova.virt.libvirt.vif [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-633682198',display_name='tempest-ServersTestJSON-server-633682198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-633682198',id=11,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPas89f0VmbRy1Z20sjM939aVj0TNS+R5hgHhKIpN+Lu2sUioSpktjVErWL7xY1SOKpwoWvlEg9TaORbUb+yc3R318/CP5Gjft0vHca1BcBEnIu2/PQSvezTTIQ460wB3w==',key_name='tempest-keypair-887805161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8eb26407c54625b02b8a9d59d7c0db',ramdisk_id='',reservation_id='r-jfkck80y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-107536503',owner_user_name='tempest-ServersTestJSON-107536503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:13:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95e701c408554b41bc92928902567588',uuid=1de5d51a-1c96-47d3-9e57-500874113cc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.688 189283 DEBUG nova.network.os_vif_util [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Converting VIF {"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.689 189283 DEBUG nova.network.os_vif_util [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:dd:d2,bridge_name='br-int',has_traffic_filtering=True,id=200d878c-fe4b-43e4-bae3-5d660334bbc3,network=Network(5d2be28c-5f23-435e-b8fc-cc5d72257618),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap200d878c-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.690 189283 DEBUG nova.objects.instance [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lazy-loading 'pci_devices' on Instance uuid 1de5d51a-1c96-47d3-9e57-500874113cc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.705 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <uuid>1de5d51a-1c96-47d3-9e57-500874113cc5</uuid>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <name>instance-0000000b</name>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:name>tempest-ServersTestJSON-server-633682198</nova:name>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:13:41</nova:creationTime>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:user uuid="95e701c408554b41bc92928902567588">tempest-ServersTestJSON-107536503-project-member</nova:user>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:project uuid="fd8eb26407c54625b02b8a9d59d7c0db">tempest-ServersTestJSON-107536503</nova:project>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         <nova:port uuid="200d878c-fe4b-43e4-bae3-5d660334bbc3">
Dec 10 20:13:41 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <system>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="serial">1de5d51a-1c96-47d3-9e57-500874113cc5</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="uuid">1de5d51a-1c96-47d3-9e57-500874113cc5</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </system>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <os>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </os>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <features>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </features>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk.config"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:05:dd:d2"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <target dev="tap200d878c-fe"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/console.log" append="off"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <video>
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </video>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:13:41 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:13:41 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:13:41 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:13:41 compute-0 nova_compute[189279]: </domain>
Dec 10 20:13:41 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.705 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Preparing to wait for external event network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.705 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.706 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.706 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.707 189283 DEBUG nova.virt.libvirt.vif [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-633682198',display_name='tempest-ServersTestJSON-server-633682198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-633682198',id=11,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPas89f0VmbRy1Z20sjM939aVj0TNS+R5hgHhKIpN+Lu2sUioSpktjVErWL7xY1SOKpwoWvlEg9TaORbUb+yc3R318/CP5Gjft0vHca1BcBEnIu2/PQSvezTTIQ460wB3w==',key_name='tempest-keypair-887805161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8eb26407c54625b02b8a9d59d7c0db',ramdisk_id='',reservation_id='r-jfkck80y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-107536503',owner_user_name='tempest-ServersTestJSON-107536503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:13:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95e701c408554b41bc92928902567588',uuid=1de5d51a-1c96-47d3-9e57-500874113cc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.707 189283 DEBUG nova.network.os_vif_util [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Converting VIF {"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.708 189283 DEBUG nova.network.os_vif_util [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:dd:d2,bridge_name='br-int',has_traffic_filtering=True,id=200d878c-fe4b-43e4-bae3-5d660334bbc3,network=Network(5d2be28c-5f23-435e-b8fc-cc5d72257618),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap200d878c-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.708 189283 DEBUG os_vif [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:dd:d2,bridge_name='br-int',has_traffic_filtering=True,id=200d878c-fe4b-43e4-bae3-5d660334bbc3,network=Network(5d2be28c-5f23-435e-b8fc-cc5d72257618),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap200d878c-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.709 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.710 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.710 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.714 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.714 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap200d878c-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.714 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap200d878c-fe, col_values=(('external_ids', {'iface-id': '200d878c-fe4b-43e4-bae3-5d660334bbc3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:dd:d2', 'vm-uuid': '1de5d51a-1c96-47d3-9e57-500874113cc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.717 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 NetworkManager[56238]: <info>  [1765397621.7182] manager: (tap200d878c-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.719 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.731 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.732 189283 INFO os_vif [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:dd:d2,bridge_name='br-int',has_traffic_filtering=True,id=200d878c-fe4b-43e4-bae3-5d660334bbc3,network=Network(5d2be28c-5f23-435e-b8fc-cc5d72257618),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap200d878c-fe')
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.819 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.820 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.820 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] No VIF found with MAC fa:16:3e:05:dd:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.821 189283 INFO nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Using config drive
Dec 10 20:13:41 compute-0 nova_compute[189279]: 2025-12-10 20:13:41.969 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.181 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.182 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.182 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.185 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa15dbef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.198 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 63639261-d8d9-46e1-8b3f-55af36a85e58 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:13:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:42.200 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/63639261-d8d9-46e1-8b3f-55af36a85e58 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.255 189283 INFO nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Creating config drive at /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk.config
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.265 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg1ah0sjm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.392 189283 DEBUG oslo_concurrency.processutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg1ah0sjm" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.437 189283 INFO nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Creating config drive at /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk.config
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.442 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa9kxtoxb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:13:42 compute-0 kernel: tapac26be7d-6e: entered promiscuous mode
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.4691] manager: (tapac26be7d-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.476 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00122|binding|INFO|Claiming lport ac26be7d-6e8b-41ce-b924-41df4889751e for this chassis.
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00123|binding|INFO|ac26be7d-6e8b-41ce-b924-41df4889751e: Claiming fa:16:3e:3a:af:0b 10.100.0.5
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00124|binding|INFO|Setting lport ac26be7d-6e8b-41ce-b924-41df4889751e ovn-installed in OVS
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.509 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.512 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:42 compute-0 systemd-udevd[249961]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:13:42 compute-0 systemd-machined[155642]: New machine qemu-11-instance-0000000c.
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.532 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:af:0b 10.100.0.5'], port_security=['fa:16:3e:3a:af:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47d38c42-e665-400f-831e-4bb560cd5fdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76049963481942ac8475b7a40994cc54', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe177918-79b5-4e8a-b8fc-7103c8813c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9861aac3-63df-40e8-b1d9-ec52094621a8, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=ac26be7d-6e8b-41ce-b924-41df4889751e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.533 106564 INFO neutron.agent.ovn.metadata.agent [-] Port ac26be7d-6e8b-41ce-b924-41df4889751e in datapath a84a3a12-17fa-4570-b2cb-3daff5d43bee bound to our chassis
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.536 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a84a3a12-17fa-4570-b2cb-3daff5d43bee
Dec 10 20:13:42 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000c.
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00125|binding|INFO|Setting lport ac26be7d-6e8b-41ce-b924-41df4889751e up in Southbound
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.5481] device (tapac26be7d-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.548 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[3688f74e-47e8-40ff-b21e-5216ac949968]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.549 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa84a3a12-11 in ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.5585] device (tapac26be7d-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.557 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa84a3a12-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.558 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5637e0f9-d5b3-4c65-850a-c58a9010dcaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.559 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[05033eb1-8fb6-4d65-b928-3808c978bca3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.573 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[523062e3-79af-44d2-ba25-d73dcc96a667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.591 189283 DEBUG oslo_concurrency.processutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa9kxtoxb" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.603 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[04be3b77-0d3e-4620-90de-577a5e0555b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.653 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[8533c23e-7913-427e-8fc9-ad416968adb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 systemd-udevd[249965]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.666 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[49b41d20-247b-4446-9684-f7c78d32c937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.6680] manager: (tapa84a3a12-10): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Dec 10 20:13:42 compute-0 kernel: tap200d878c-fe: entered promiscuous mode
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00126|binding|INFO|Claiming lport 200d878c-fe4b-43e4-bae3-5d660334bbc3 for this chassis.
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00127|binding|INFO|200d878c-fe4b-43e4-bae3-5d660334bbc3: Claiming fa:16:3e:05:dd:d2 10.100.0.7
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.699 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.7037] manager: (tap200d878c-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00128|binding|INFO|Setting lport 200d878c-fe4b-43e4-bae3-5d660334bbc3 ovn-installed in OVS
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.712 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.717 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.7254] device (tap200d878c-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.7260] device (tap200d878c-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.725 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[ed19eb9f-cd88-46d4-bca1-b6f3251961ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.733 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[21131126-9e93-4005-affe-fb1e05eb72ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 systemd-machined[155642]: New machine qemu-12-instance-0000000b.
Dec 10 20:13:42 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Dec 10 20:13:42 compute-0 NetworkManager[56238]: <info>  [1765397622.7620] device (tapa84a3a12-10): carrier: link connected
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.773 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3fc0ba-e190-49ea-ba28-83629dbc642e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_controller[97701]: 2025-12-10T20:13:42Z|00129|binding|INFO|Setting lport 200d878c-fe4b-43e4-bae3-5d660334bbc3 up in Southbound
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.804 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[2665a502-4ba1-4e33-a76b-86366e62e641]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa84a3a12-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:11:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495141, 'reachable_time': 30542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250011, 'error': None, 'target': 'ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.809 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:dd:d2 10.100.0.7'], port_security=['fa:16:3e:05:dd:d2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1de5d51a-1c96-47d3-9e57-500874113cc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8eb26407c54625b02b8a9d59d7c0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5627a2d3-cce8-4191-b32b-6955bcfdde6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a0293b4-9385-4065-be4d-3094819c09e0, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=200d878c-fe4b-43e4-bae3-5d660334bbc3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.843 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[50c21223-a466-42c1-a985-41b710dd2850]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:11e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495141, 'tstamp': 495141}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250014, 'error': None, 'target': 'ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.873 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[651af839-c408-4c9e-beeb-7088c0ad29d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa84a3a12-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:11:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495141, 'reachable_time': 30542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250024, 'error': None, 'target': 'ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:42.922 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[cbcfa9fc-e2f2-4393-8ffa-869ddc1db768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.947 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397622.9468424, 47d38c42-e665-400f-831e-4bb560cd5fdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.948 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] VM Started (Lifecycle Event)
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.986 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.993 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397622.9473119, 47d38c42-e665-400f-831e-4bb560cd5fdb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:42 compute-0 nova_compute[189279]: 2025-12-10 20:13:42.993 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] VM Paused (Lifecycle Event)
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.013 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.021 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.030 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0ea73d-e511-440b-9d2d-d277aa6716ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.032 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa84a3a12-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.032 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.033 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa84a3a12-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:43 compute-0 kernel: tapa84a3a12-10: entered promiscuous mode
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.035 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.037 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:43 compute-0 NetworkManager[56238]: <info>  [1765397623.0390] manager: (tapa84a3a12-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.040 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:43.041 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1978 Content-Type: application/json Date: Wed, 10 Dec 2025 20:13:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2cdf3c79-cc8e-4aa3-9dd9-e71fd4dc2b11 x-openstack-request-id: req-2cdf3c79-cc8e-4aa3-9dd9-e71fd4dc2b11 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:13:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:43.042 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "63639261-d8d9-46e1-8b3f-55af36a85e58", "name": "tempest-ServerActionsTestJSON-server-1460650199", "status": "ACTIVE", "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "user_id": "0c9cd4059c654dd4947e252e9f3acf85", "metadata": {}, "hostId": "ce4d54eedd76be282ae0c875d05f97d3278ad456e45d4f1da092ac6d", "image": {"id": "33b11153-486b-4d32-bc63-6b6a6ed0b704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/33b11153-486b-4d32-bc63-6b6a6ed0b704"}]}, "flavor": {"id": "e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4"}]}, "created": "2025-12-10T20:12:48Z", "updated": "2025-12-10T20:13:04Z", "addresses": {"tempest-ServerActionsTestJSON-822085889-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f8:b0:0b"}, {"version": 4, "addr": "192.168.122.244", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f8:b0:0b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/63639261-d8d9-46e1-8b3f-55af36a85e58"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/63639261-d8d9-46e1-8b3f-55af36a85e58"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-71097797", "OS-SRV-USG:launched_at": "2025-12-10T20:13:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1345580746"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:13:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:43.042 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/63639261-d8d9-46e1-8b3f-55af36a85e58 used request id req-2cdf3c79-cc8e-4aa3-9dd9-e71fd4dc2b11 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.042 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa84a3a12-10, col_values=(('external_ids', {'iface-id': '86da816f-df66-4db4-acdb-d073547d11bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:43.045 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '63639261-d8d9-46e1-8b3f-55af36a85e58', 'name': 'tempest-ServerActionsTestJSON-server-1460650199', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2e63db29894648c7a06ef3bcb4b98768', 'user_id': '0c9cd4059c654dd4947e252e9f3acf85', 'hostId': 'ce4d54eedd76be282ae0c875d05f97d3278ad456e45d4f1da092ac6d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.045 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:43 compute-0 ovn_controller[97701]: 2025-12-10T20:13:43Z|00130|binding|INFO|Releasing lport 86da816f-df66-4db4-acdb-d073547d11bb from this chassis (sb_readonly=0)
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.047 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:43.048 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 47d38c42-e665-400f-831e-4bb560cd5fdb from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:13:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:43.048 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/47d38c42-e665-400f-831e-4bb560cd5fdb -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.049 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a84a3a12-17fa-4570-b2cb-3daff5d43bee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a84a3a12-17fa-4570-b2cb-3daff5d43bee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.051 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[630a169a-fa54-4c27-bc6d-072f780c28e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.053 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-a84a3a12-17fa-4570-b2cb-3daff5d43bee
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/a84a3a12-17fa-4570-b2cb-3daff5d43bee.pid.haproxy
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID a84a3a12-17fa-4570-b2cb-3daff5d43bee
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.053 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'env', 'PROCESS_TAG=haproxy-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a84a3a12-17fa-4570-b2cb-3daff5d43bee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.060 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.104 189283 DEBUG nova.network.neutron [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Updated VIF entry in instance network info cache for port ac26be7d-6e8b-41ce-b924-41df4889751e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.105 189283 DEBUG nova.network.neutron [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Updating instance_info_cache with network_info: [{"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.130 189283 DEBUG oslo_concurrency.lockutils [req-de08d318-4355-407d-a5a1-e16b0a47231d req-35b653de-4f68-438e-88d5-b4135f025750 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-47d38c42-e665-400f-831e-4bb560cd5fdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.136 189283 DEBUG nova.compute.manager [req-ef4f4634-b210-4ae4-8ee0-0a73befede4a req-de59a563-cbbe-4106-951c-54ad0df46675 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received event network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.136 189283 DEBUG oslo_concurrency.lockutils [req-ef4f4634-b210-4ae4-8ee0-0a73befede4a req-de59a563-cbbe-4106-951c-54ad0df46675 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.137 189283 DEBUG oslo_concurrency.lockutils [req-ef4f4634-b210-4ae4-8ee0-0a73befede4a req-de59a563-cbbe-4106-951c-54ad0df46675 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.137 189283 DEBUG oslo_concurrency.lockutils [req-ef4f4634-b210-4ae4-8ee0-0a73befede4a req-de59a563-cbbe-4106-951c-54ad0df46675 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.138 189283 DEBUG nova.compute.manager [req-ef4f4634-b210-4ae4-8ee0-0a73befede4a req-de59a563-cbbe-4106-951c-54ad0df46675 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Processing event network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.138 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.145 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.157 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397623.15138, 47d38c42-e665-400f-831e-4bb560cd5fdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.157 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] VM Resumed (Lifecycle Event)
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.161 189283 INFO nova.virt.libvirt.driver [-] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Instance spawned successfully.
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.161 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.182 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.192 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.199 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.199 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.200 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.200 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.201 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.202 189283 DEBUG nova.virt.libvirt.driver [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.237 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.575 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397623.5675104, 1de5d51a-1c96-47d3-9e57-500874113cc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.575 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] VM Started (Lifecycle Event)
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.577 189283 INFO nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Took 7.34 seconds to spawn the instance on the hypervisor.
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.578 189283 DEBUG nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:43 compute-0 podman[250065]: 2025-12-10 20:13:43.604050568 +0000 UTC m=+0.085500742 container create 94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.620 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.625 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397623.5737762, 1de5d51a-1c96-47d3-9e57-500874113cc5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.625 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] VM Paused (Lifecycle Event)
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.644 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:43 compute-0 podman[250065]: 2025-12-10 20:13:43.56011509 +0000 UTC m=+0.041565284 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.654 189283 INFO nova.compute.manager [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Took 8.09 seconds to build instance.
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.657 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:43 compute-0 systemd[1]: Started libpod-conmon-94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357.scope.
Dec 10 20:13:43 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e6449958400efca171ee75ac5d58f7b2861b7eaff2f9910b72cb3f9ca1669d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:13:43 compute-0 podman[250065]: 2025-12-10 20:13:43.749937421 +0000 UTC m=+0.231387625 container init 94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 10 20:13:43 compute-0 podman[250065]: 2025-12-10 20:13:43.767795644 +0000 UTC m=+0.249245818 container start 94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 10 20:13:43 compute-0 neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee[250080]: [NOTICE]   (250084) : New worker (250086) forked
Dec 10 20:13:43 compute-0 neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee[250080]: [NOTICE]   (250084) : Loading success.
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.902 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:43 compute-0 nova_compute[189279]: 2025-12-10 20:13:43.902 189283 DEBUG oslo_concurrency.lockutils [None req-bbdf6cb1-7db3-4a42-a309-11a740effda8 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.921 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 200d878c-fe4b-43e4-bae3-5d660334bbc3 in datapath 5d2be28c-5f23-435e-b8fc-cc5d72257618 unbound from our chassis
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.923 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d2be28c-5f23-435e-b8fc-cc5d72257618
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.933 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[607d0ed9-8cc7-4531-8015-35677321cd95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.934 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d2be28c-51 in ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.936 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d2be28c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.936 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[21cb413a-9ba7-4f84-b4ef-17cde0c971a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.938 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b41da2a0-9cab-4f22-9621-a30e019a3c66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.954 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[7a320f55-0f27-4bc8-ab7a-9505b758e741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:43 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:43.971 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c5928514-4e4d-4679-b8f2-2c975749a647]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.003 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[a5964ae3-2393-4677-b365-9471069baf44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 NetworkManager[56238]: <info>  [1765397624.0143] manager: (tap5d2be28c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.013 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[89a6363f-2a56-4c1a-996f-9d5d9f408fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 systemd-udevd[249998]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:13:44 compute-0 nova_compute[189279]: 2025-12-10 20:13:44.027 189283 DEBUG nova.network.neutron [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Updated VIF entry in instance network info cache for port 200d878c-fe4b-43e4-bae3-5d660334bbc3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:44 compute-0 nova_compute[189279]: 2025-12-10 20:13:44.027 189283 DEBUG nova.network.neutron [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Updating instance_info_cache with network_info: [{"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.063 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[9770cb50-b99e-40b3-a435-db386cf75fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.066 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[9809a540-c87d-4d2f-a923-8e5e115c21e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 NetworkManager[56238]: <info>  [1765397624.0944] device (tap5d2be28c-50): carrier: link connected
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.107 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4d8bef-63b1-41a6-a3cd-87a44a8d0cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.136 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[1f170dd5-382a-4f41-a577-3b3c10e3b5aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d2be28c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:5f:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495275, 'reachable_time': 24395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250105, 'error': None, 'target': 'ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.169 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e6494cda-57d9-4e71-b0ea-5607d99c32c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:5f33'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495275, 'tstamp': 495275}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250106, 'error': None, 'target': 'ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.202 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[98a6ecfd-e472-4d9d-a1b2-7bd1c75d4aca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d2be28c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:5f:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495275, 'reachable_time': 24395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250107, 'error': None, 'target': 'ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.246 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[936d70bd-4068-4bd8-86d1-105682a644dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.309 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8e6e2b-fa53-437d-b332-a210c1f78124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.311 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d2be28c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.311 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.311 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d2be28c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:44 compute-0 nova_compute[189279]: 2025-12-10 20:13:44.313 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:44 compute-0 NetworkManager[56238]: <info>  [1765397624.3145] manager: (tap5d2be28c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec 10 20:13:44 compute-0 nova_compute[189279]: 2025-12-10 20:13:44.316 189283 DEBUG oslo_concurrency.lockutils [req-5c387ffc-7b89-49ca-b5a1-fce5208e40b8 req-ce456379-5012-4cdf-bee1-fb2abcea1f7a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:44 compute-0 kernel: tap5d2be28c-50: entered promiscuous mode
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.325 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d2be28c-50, col_values=(('external_ids', {'iface-id': 'edcd1c97-30a1-42f5-b6cc-16c6959863ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:44 compute-0 ovn_controller[97701]: 2025-12-10T20:13:44Z|00131|binding|INFO|Releasing lport edcd1c97-30a1-42f5-b6cc-16c6959863ba from this chassis (sb_readonly=0)
Dec 10 20:13:44 compute-0 nova_compute[189279]: 2025-12-10 20:13:44.329 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:44 compute-0 nova_compute[189279]: 2025-12-10 20:13:44.344 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:44 compute-0 nova_compute[189279]: 2025-12-10 20:13:44.345 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.346 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d2be28c-5f23-435e-b8fc-cc5d72257618.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d2be28c-5f23-435e-b8fc-cc5d72257618.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.347 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[07940059-6019-4c0e-81dc-9418cba3f75d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.348 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-5d2be28c-5f23-435e-b8fc-cc5d72257618
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/5d2be28c-5f23-435e-b8fc-cc5d72257618.pid.haproxy
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID 5d2be28c-5f23-435e-b8fc-cc5d72257618
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:13:44 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:44.349 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'env', 'PROCESS_TAG=haproxy-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d2be28c-5f23-435e-b8fc-cc5d72257618.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:13:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:44.369 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1623 Content-Type: application/json Date: Wed, 10 Dec 2025 20:13:43 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f106a63c-491b-42c8-a8d6-3a10e3dc66e7 x-openstack-request-id: req-f106a63c-491b-42c8-a8d6-3a10e3dc66e7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:13:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:44.370 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "47d38c42-e665-400f-831e-4bb560cd5fdb", "name": "tempest-ServerAddressesTestJSON-server-849312876", "status": "BUILD", "tenant_id": "76049963481942ac8475b7a40994cc54", "user_id": "3f9ff7d7d145486fb37626518d98db5e", "metadata": {}, "hostId": "2d5b1d1def8eebc18060225f2f934642b0f2279612f1b8d1f61a20cf", "image": {"id": "33b11153-486b-4d32-bc63-6b6a6ed0b704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/33b11153-486b-4d32-bc63-6b6a6ed0b704"}]}, "flavor": {"id": "e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4"}]}, "created": "2025-12-10T20:13:34Z", "updated": "2025-12-10T20:13:43Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/47d38c42-e665-400f-831e-4bb560cd5fdb"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/47d38c42-e665-400f-831e-4bb560cd5fdb"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:13:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:44.370 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/47d38c42-e665-400f-831e-4bb560cd5fdb used request id req-f106a63c-491b-42c8-a8d6-3a10e3dc66e7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:13:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:44.371 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '47d38c42-e665-400f-831e-4bb560cd5fdb', 'name': 'tempest-ServerAddressesTestJSON-server-849312876', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '76049963481942ac8475b7a40994cc54', 'user_id': '3f9ff7d7d145486fb37626518d98db5e', 'hostId': '2d5b1d1def8eebc18060225f2f934642b0f2279612f1b8d1f61a20cf', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:13:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:44.378 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 81f60881-4334-4ede-a10d-454a7e8a4154 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:13:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:44.382 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/81f60881-4334-4ede-a10d-454a7e8a4154 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:13:44 compute-0 podman[250137]: 2025-12-10 20:13:44.825711208 +0000 UTC m=+0.077756622 container create 5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 10 20:13:44 compute-0 systemd[1]: Started libpod-conmon-5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979.scope.
Dec 10 20:13:44 compute-0 podman[250137]: 2025-12-10 20:13:44.788373139 +0000 UTC m=+0.040418613 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:13:44 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9391a3f87281ca579fe4ab0bbcf2bf8ff1467dab138ee38ef56319e14dfec0e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:13:44 compute-0 podman[250137]: 2025-12-10 20:13:44.938181629 +0000 UTC m=+0.190227053 container init 5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:13:44 compute-0 podman[250137]: 2025-12-10 20:13:44.947062438 +0000 UTC m=+0.199107862 container start 5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 10 20:13:44 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [NOTICE]   (250157) : New worker (250159) forked
Dec 10 20:13:44 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [NOTICE]   (250157) : Loading success.
Dec 10 20:13:45 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:45.354 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1995 Content-Type: application/json Date: Wed, 10 Dec 2025 20:13:44 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-48279a79-5a5b-4121-9495-d4e45730b3b9 x-openstack-request-id: req-48279a79-5a5b-4121-9495-d4e45730b3b9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:13:45 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:45.357 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "81f60881-4334-4ede-a10d-454a7e8a4154", "name": "tempest-AttachInterfacesUnderV243Test-server-626488523", "status": "ACTIVE", "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "user_id": "9901235a2b1b4cf4b7a0d6fd53dd0396", "metadata": {}, "hostId": "320f4df9ec5d9bfeb2597ef006885f33fb5c08d3580713400e9ae9f3", "image": {"id": "33b11153-486b-4d32-bc63-6b6a6ed0b704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/33b11153-486b-4d32-bc63-6b6a6ed0b704"}]}, "flavor": {"id": "e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4"}]}, "created": "2025-12-10T20:12:51Z", "updated": "2025-12-10T20:13:02Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1692550304-network": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:cb:c2:44"}, {"version": 4, "addr": "192.168.122.191", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:cb:c2:44"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/81f60881-4334-4ede-a10d-454a7e8a4154"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/81f60881-4334-4ede-a10d-454a7e8a4154"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-945515570", "OS-SRV-USG:launched_at": "2025-12-10T20:13:02.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--248416232"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:13:45 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:45.358 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/81f60881-4334-4ede-a10d-454a7e8a4154 used request id req-48279a79-5a5b-4121-9495-d4e45730b3b9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:13:45 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:45.363 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '81f60881-4334-4ede-a10d-454a7e8a4154', 'name': 'tempest-AttachInterfacesUnderV243Test-server-626488523', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2505343710a74a61bea5fcb849a4b61b', 'user_id': '9901235a2b1b4cf4b7a0d6fd53dd0396', 'hostId': '320f4df9ec5d9bfeb2597ef006885f33fb5c08d3580713400e9ae9f3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:13:45 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:45.369 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a4a66175-57ff-48da-8473-e93f72da4499 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:13:45 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:45.376 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a4a66175-57ff-48da-8473-e93f72da4499 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.807 189283 DEBUG nova.compute.manager [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received event network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.807 189283 DEBUG oslo_concurrency.lockutils [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.807 189283 DEBUG oslo_concurrency.lockutils [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.808 189283 DEBUG oslo_concurrency.lockutils [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.808 189283 DEBUG nova.compute.manager [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] No waiting events found dispatching network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.808 189283 WARNING nova.compute.manager [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received unexpected event network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e for instance with vm_state active and task_state None.
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.808 189283 DEBUG nova.compute.manager [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.808 189283 DEBUG oslo_concurrency.lockutils [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.809 189283 DEBUG oslo_concurrency.lockutils [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.809 189283 DEBUG oslo_concurrency.lockutils [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.809 189283 DEBUG nova.compute.manager [req-6e8fcc41-b8aa-4c4f-944b-112c0191efc3 req-5327b9f0-f446-4a20-80ee-caee8a23893e 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Processing event network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.809 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.823 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397625.8182955, 1de5d51a-1c96-47d3-9e57-500874113cc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.824 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] VM Resumed (Lifecycle Event)
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.827 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.834 189283 INFO nova.virt.libvirt.driver [-] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Instance spawned successfully.
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.834 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.855 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.876 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.888 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.888 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.889 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.889 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.890 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.890 189283 DEBUG nova.virt.libvirt.driver [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.913 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.995 189283 INFO nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Took 10.60 seconds to spawn the instance on the hypervisor.
Dec 10 20:13:45 compute-0 nova_compute[189279]: 2025-12-10 20:13:45.996 189283 DEBUG nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:13:46 compute-0 nova_compute[189279]: 2025-12-10 20:13:46.096 189283 INFO nova.compute.manager [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Took 11.13 seconds to build instance.
Dec 10 20:13:46 compute-0 nova_compute[189279]: 2025-12-10 20:13:46.118 189283 DEBUG oslo_concurrency.lockutils [None req-c9af300e-a4b9-4cdf-a18f-1e0cf733b7f8 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:46 compute-0 podman[250168]: 2025-12-10 20:13:46.138783229 +0000 UTC m=+0.119770127 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 10 20:13:46 compute-0 podman[250169]: 2025-12-10 20:13:46.1399017 +0000 UTC m=+0.116726177 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:13:46 compute-0 podman[250170]: 2025-12-10 20:13:46.156156249 +0000 UTC m=+0.120816067 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, maintainer=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 10 20:13:46 compute-0 nova_compute[189279]: 2025-12-10 20:13:46.718 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:46 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:46.785 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1975 Content-Type: application/json Date: Wed, 10 Dec 2025 20:13:45 GMT Keep-Alive: timeout=5, max=97 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2a4b57b2-fe5a-4f6b-9bb3-ef3827aeeaa2 x-openstack-request-id: req-2a4b57b2-fe5a-4f6b-9bb3-ef3827aeeaa2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:13:46 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:46.785 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a4a66175-57ff-48da-8473-e93f72da4499", "name": "tempest-TestNetworkBasicOps-server-1430019440", "status": "ACTIVE", "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "user_id": "598a18069aae495194ab1b43958530aa", "metadata": {}, "hostId": "77ed2eae73f773c2c7c4aa81929b0432503c5b154e2d50129edc86d1", "image": {"id": "33b11153-486b-4d32-bc63-6b6a6ed0b704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/33b11153-486b-4d32-bc63-6b6a6ed0b704"}]}, "flavor": {"id": "e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4"}]}, "created": "2025-12-10T20:13:07Z", "updated": "2025-12-10T20:13:18Z", "addresses": {"tempest-network-smoke--2109787748": [{"version": 4, "addr": "10.100.0.14", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a8:ab:64"}, {"version": 4, "addr": "192.168.122.187", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a8:ab:64"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a4a66175-57ff-48da-8473-e93f72da4499"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a4a66175-57ff-48da-8473-e93f72da4499"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-103146956", "OS-SRV-USG:launched_at": "2025-12-10T20:13:18.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-307236102"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:13:46 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:46.786 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a4a66175-57ff-48da-8473-e93f72da4499 used request id req-2a4b57b2-fe5a-4f6b-9bb3-ef3827aeeaa2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:13:46 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:46.788 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a4a66175-57ff-48da-8473-e93f72da4499', 'name': 'tempest-TestNetworkBasicOps-server-1430019440', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8a51cea6d1cb40c383b87a400100e902', 'user_id': '598a18069aae495194ab1b43958530aa', 'hostId': '77ed2eae73f773c2c7c4aa81929b0432503c5b154e2d50129edc86d1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:13:46 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:46.791 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1de5d51a-1c96-47d3-9e57-500874113cc5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:13:46 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:46.792 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1de5d51a-1c96-47d3-9e57-500874113cc5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:13:46 compute-0 nova_compute[189279]: 2025-12-10 20:13:46.976 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.466 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1882 Content-Type: application/json Date: Wed, 10 Dec 2025 20:13:46 GMT Keep-Alive: timeout=5, max=96 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-21012f48-6d45-45f6-9133-15940062dd68 x-openstack-request-id: req-21012f48-6d45-45f6-9133-15940062dd68 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.467 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1de5d51a-1c96-47d3-9e57-500874113cc5", "name": "tempest-ServersTestJSON-server-633682198", "status": "ACTIVE", "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "user_id": "95e701c408554b41bc92928902567588", "metadata": {"hello": "world"}, "hostId": "e991233ecc3e5e2b8d35d0973b3dea5551cc2977bff98036d82becf0", "image": {"id": "33b11153-486b-4d32-bc63-6b6a6ed0b704", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/33b11153-486b-4d32-bc63-6b6a6ed0b704"}]}, "flavor": {"id": "e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4"}]}, "created": "2025-12-10T20:13:34Z", "updated": "2025-12-10T20:13:46Z", "addresses": {"tempest-ServersTestJSON-1378806211-network": [{"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:05:dd:d2"}]}, "accessIPv4": "1.1.1.1", "accessIPv6": "::babe:dc0c:1602", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1de5d51a-1c96-47d3-9e57-500874113cc5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1de5d51a-1c96-47d3-9e57-500874113cc5"}], "OS-DCF:diskConfig": "AUTO", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-887805161", "OS-SRV-USG:launched_at": "2025-12-10T20:13:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--2114323085"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.468 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1de5d51a-1c96-47d3-9e57-500874113cc5 used request id req-21012f48-6d45-45f6-9133-15940062dd68 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.469 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1de5d51a-1c96-47d3-9e57-500874113cc5', 'name': 'tempest-ServersTestJSON-server-633682198', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'fd8eb26407c54625b02b8a9d59d7c0db', 'user_id': '95e701c408554b41bc92928902567588', 'hostId': 'e991233ecc3e5e2b8d35d0973b3dea5551cc2977bff98036d82becf0', 'status': 'active', 'metadata': {'hello': 'world'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
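The "instance data" dicts logged by the discovery step are a trimmed projection of the server documents returned above, joined with flavor details. A rough illustration of that mapping follows; this is not ceilometer's actual code, and flavor_details is a stand-in for the separate flavor lookup.

def to_instance_data(server, flavor_details):
    """Project a Nova server document onto the fields the discovery
    log reports for each instance."""
    return {
        "id": server["id"],
        "name": server["name"],
        "flavor": flavor_details,  # e.g. {'name': 'm1.nano', 'vcpus': 1, 'ram': 128, ...}
        "image": {"id": server["image"]["id"]},
        "OS-EXT-SRV-ATTR:instance_name": server["OS-EXT-SRV-ATTR:instance_name"],
        "OS-EXT-SRV-ATTR:host": server["OS-EXT-SRV-ATTR:host"],
        # In the logged dicts this field carries the libvirt run state
        # ('running') rather than Nova's 'active'; the agent substitutes it.
        "OS-EXT-STS:vm_state": server["OS-EXT-STS:vm_state"],
        "tenant_id": server["tenant_id"],
        "user_id": server["user_id"],
        "hostId": server["hostId"],
        "status": server["status"].lower(),
        "metadata": server.get("metadata", {}),
    }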
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.470 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.470 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.474 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:13:47.471844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.476 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:13:47.476025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.494 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.495 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.511 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.512 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.532 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.532 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.551 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.551 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.578 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.579 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
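The disk.device.capacity samples above (1073741824 bytes per root disk, matching the 1 GiB m1.nano root volume) and the disk.device.allocation figures later in this cycle come from libvirt's per-device block info. A minimal libvirt-python sketch follows; the qemu:///system URI and the 'vda' target name are assumptions about the local setup.

import libvirt

# Read-only connection to the local libvirt daemon.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-0000000a")

# blockInfo() returns [capacity, allocation, physical] in bytes for one
# target device, the same trio behind the capacity/allocation samples.
capacity, allocation, physical = dom.blockInfo("vda")
print("capacity:", capacity, "allocation:", allocation, "physical:", physical)
conn.close()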
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:13:47.581631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:13:47.584255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.589 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 63639261-d8d9-46e1-8b3f-55af36a85e58 / tapa0f4e290-5b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.590 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.594 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 47d38c42-e665-400f-831e-4bb560cd5fdb / tapac26be7d-6e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.594 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.598 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 81f60881-4334-4ede-a10d-454a7e8a4154 / tap42ea5f6d-dd inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.598 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.603 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a4a66175-57ff-48da-8473-e93f72da4499 / tap3ae03bc4-72 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.603 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.607 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1de5d51a-1c96-47d3-9e57-500874113cc5 / tap200d878c-fe inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.608 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
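The "No delta meter predecessor" lines simply mean this is the first poll for that vNIC, so there is no earlier counter to subtract; the cumulative meters still report their raw values (15, 1, ...), while the *.delta meters later in this cycle come out as 0. A toy sketch of that cached-predecessor pattern follows, purely illustrative and not ceilometer's implementation.

_previous = {}  # (instance_id, device) -> last cumulative counter value

def delta_sample(instance_id, device, cumulative):
    """Return the increase since the previous poll, or 0 on the first
    poll when no predecessor exists (matching the .delta samples above)."""
    key = (instance_id, device)
    prev = _previous.get(key)
    _previous[key] = cumulative
    if prev is None:
        return 0
    return max(cumulative - prev, 0)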
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.610 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.610 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.611 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.612 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.612 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.613 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:13:47.610370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:13:47.616133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.616 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.617 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.617 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.618 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.618 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.619 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.620 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.620 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:13:47.620510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.621 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.621 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.621 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.622 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.622 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.624 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:13:47.624198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.624 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.625 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.625 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.626 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.626 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.627 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.628 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:13:47.628633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.663 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/memory.usage volume: 46.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.688 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.688 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 47d38c42-e665-400f-831e-4bb560cd5fdb: ceilometer.compute.pollsters.NoVolumeException
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.716 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/memory.usage volume: 46.609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.751 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/memory.usage volume: 40.46875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.808 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.809 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 1de5d51a-1c96-47d3-9e57-500874113cc5: ceilometer.compute.pollsters.NoVolumeException
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
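The mixed results above, where three domains report a memory.usage value (46.85, 46.61 and 40.47 MB) and two report Unavailable, are typical of libvirt balloon statistics: a value only appears once the guest's virtio balloon driver has published stats. A minimal libvirt-python check follows; qemu:///system and the instance name (taken from the Nova attributes logged earlier) are assumptions about where this is run.

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000000b")  # logged as Unavailable above

# Ask QEMU to refresh balloon statistics every 10 seconds, then read them.
# An empty or partial dict here is what surfaces as "Unavailable".
dom.setMemoryStatsPeriod(10)
stats = dom.memoryStats()
print(stats)  # keys such as 'available', 'unused', 'rss' when present
conn.close()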
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.810 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.810 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.810 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.810 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T20:13:47.810450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.811 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1460650199>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-849312876>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-626488523>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1430019440>, <NovaLikeServer: tempest-ServersTestJSON-server-633682198>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1460650199>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-849312876>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-626488523>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1430019440>, <NovaLikeServer: tempest-ServersTestJSON-server-633682198>]
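The ERROR above is the polling manager blocklisting network.outgoing.bytes.rate after PollsterPermanentError, which follows directly from the DEBUG line noting that the libvirt inspector provides no data for OutgoingBytesRatePollster. One way to keep the cycle quiet is simply not to list the *.rate meters in polling.yaml; below is a sketch that writes such a file, where the interval and the exact meter list are assumptions based on the pollsters seen in this cycle rather than this deployment's real configuration.

import yaml

polling = {
    "sources": [{
        "name": "pollsters",   # matches the source name in the log
        "interval": 120,       # assumption; use your deployment's value
        "meters": [
            "disk.ephemeral.size", "disk.root.size",
            "disk.device.capacity", "disk.device.allocation",
            "memory.usage",
            "network.incoming.bytes", "network.outgoing.bytes",
            "network.incoming.packets", "network.outgoing.packets",
            # deliberately no network.*.rate entries: the libvirt
            # inspector does not provide data for them
        ],
    }],
}

with open("polling.yaml", "w") as fh:
    yaml.safe_dump(polling, fh, default_flow_style=False)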
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.811 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.812 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.812 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.815 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.816 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.816 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.817 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:13:47.812084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.819 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.820 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:13:47.820758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.821 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.822 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.822 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.823 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.824 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.824 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.825 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.826 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.826 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.827 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.829 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.830 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:13:47.830520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.830 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.831 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.831 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.832 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.833 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.834 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.835 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.837 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:13:47.837105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.837 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.838 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.838 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.839 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.840 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.841 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.843 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.844 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.845 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.845 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.846 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.847 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.848 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.848 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.849 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:13:47.842846) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.849 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:13:47.848437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.902 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.read.bytes volume: 30714368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:47.903 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.924 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.947 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.947 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.947 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.948 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.948 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.949 189283 INFO nova.compute.manager [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Terminating instance
Dec 10 20:13:47 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.950 189283 DEBUG nova.compute.manager [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:13:47 compute-0 kernel: tapac26be7d-6e (unregistering): left promiscuous mode
Dec 10 20:13:47 compute-0 NetworkManager[56238]: <info>  [1765397627.9900] device (tapac26be7d-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.998 189283 DEBUG nova.compute.manager [req-51e4c2d5-c8ac-4112-85eb-11148230afa1 req-ae0374aa-0328-43e1-8f52-e42aa3c3fff5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.998 189283 DEBUG oslo_concurrency.lockutils [req-51e4c2d5-c8ac-4112-85eb-11148230afa1 req-ae0374aa-0328-43e1-8f52-e42aa3c3fff5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.998 189283 DEBUG oslo_concurrency.lockutils [req-51e4c2d5-c8ac-4112-85eb-11148230afa1 req-ae0374aa-0328-43e1-8f52-e42aa3c3fff5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.999 189283 DEBUG oslo_concurrency.lockutils [req-51e4c2d5-c8ac-4112-85eb-11148230afa1 req-ae0374aa-0328-43e1-8f52-e42aa3c3fff5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.999 189283 DEBUG nova.compute.manager [req-51e4c2d5-c8ac-4112-85eb-11148230afa1 req-ae0374aa-0328-43e1-8f52-e42aa3c3fff5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] No waiting events found dispatching network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:47.999 189283 WARNING nova.compute.manager [req-51e4c2d5-c8ac-4112-85eb-11148230afa1 req-ae0374aa-0328-43e1-8f52-e42aa3c3fff5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received unexpected event network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 for instance with vm_state active and task_state None.
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.012 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 ovn_controller[97701]: 2025-12-10T20:13:48Z|00132|binding|INFO|Releasing lport ac26be7d-6e8b-41ce-b924-41df4889751e from this chassis (sb_readonly=0)
Dec 10 20:13:48 compute-0 ovn_controller[97701]: 2025-12-10T20:13:48Z|00133|binding|INFO|Setting lport ac26be7d-6e8b-41ce-b924-41df4889751e down in Southbound
Dec 10 20:13:48 compute-0 ovn_controller[97701]: 2025-12-10T20:13:48Z|00134|binding|INFO|Removing iface tapac26be7d-6e ovn-installed in OVS
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.019 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.022 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:af:0b 10.100.0.5'], port_security=['fa:16:3e:3a:af:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47d38c42-e665-400f-831e-4bb560cd5fdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76049963481942ac8475b7a40994cc54', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe177918-79b5-4e8a-b8fc-7103c8813c05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9861aac3-63df-40e8-b1d9-ec52094621a8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=ac26be7d-6e8b-41ce-b924-41df4889751e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.024 106564 INFO neutron.agent.ovn.metadata.agent [-] Port ac26be7d-6e8b-41ce-b924-41df4889751e in datapath a84a3a12-17fa-4570-b2cb-3daff5d43bee unbound from our chassis
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.029 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a84a3a12-17fa-4570-b2cb-3daff5d43bee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.030 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.034 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[653e1dff-6c0d-4b28-bc2d-00e8feac59a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.036 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee namespace which is not needed anymore
Dec 10 20:13:48 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 10 20:13:48 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000c.scope: Consumed 5.240s CPU time.
Dec 10 20:13:48 compute-0 systemd-machined[155642]: Machine qemu-11-instance-0000000c terminated.
Dec 10 20:13:48 compute-0 virtqemud[188902]: Unable to read from monitor: Connection reset by peer
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: libvirt: QEMU Driver error : Unable to read from monitor: Connection reset by peer
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.162 14 WARNING ceilometer.compute.virt.libvirt.inspector [-] Error from libvirt while checking blockStats. This may not be harmful, but please check: Unable to read from monitor: Connection reset by peer: libvirt.libvirtError: Unable to read from monitor: Connection reset by peer
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.163 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.185 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.193 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.240 189283 INFO nova.virt.libvirt.driver [-] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Instance destroyed successfully.
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.240 189283 DEBUG nova.objects.instance [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lazy-loading 'resources' on Instance uuid 47d38c42-e665-400f-831e-4bb560cd5fdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.251 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.read.bytes volume: 30304768 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.251 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.271 189283 DEBUG nova.virt.libvirt.vif [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-849312876',display_name='tempest-ServerAddressesTestJSON-server-849312876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-849312876',id=12,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76049963481942ac8475b7a40994cc54',ramdisk_id='',reservation_id='r-noeicbi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-421083304',owner_user_name='tempest-ServerAddressesTestJSON-421083304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:13:43Z,user_data=None,user_id='3f9ff7d7d145486fb37626518d98db5e',uuid=47d38c42-e665-400f-831e-4bb560cd5fdb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.272 189283 DEBUG nova.network.os_vif_util [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Converting VIF {"id": "ac26be7d-6e8b-41ce-b924-41df4889751e", "address": "fa:16:3e:3a:af:0b", "network": {"id": "a84a3a12-17fa-4570-b2cb-3daff5d43bee", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325546459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76049963481942ac8475b7a40994cc54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac26be7d-6e", "ovs_interfaceid": "ac26be7d-6e8b-41ce-b924-41df4889751e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.275 189283 DEBUG nova.network.os_vif_util [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:af:0b,bridge_name='br-int',has_traffic_filtering=True,id=ac26be7d-6e8b-41ce-b924-41df4889751e,network=Network(a84a3a12-17fa-4570-b2cb-3daff5d43bee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac26be7d-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.276 189283 DEBUG os_vif [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:af:0b,bridge_name='br-int',has_traffic_filtering=True,id=ac26be7d-6e8b-41ce-b924-41df4889751e,network=Network(a84a3a12-17fa-4570-b2cb-3daff5d43bee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac26be7d-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.278 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.278 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac26be7d-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.280 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.283 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.285 189283 INFO os_vif [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:af:0b,bridge_name='br-int',has_traffic_filtering=True,id=ac26be7d-6e8b-41ce-b924-41df4889751e,network=Network(a84a3a12-17fa-4570-b2cb-3daff5d43bee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac26be7d-6e')
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.285 189283 INFO nova.virt.libvirt.driver [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Deleting instance files /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb_del
Dec 10 20:13:48 compute-0 neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee[250080]: [NOTICE]   (250084) : haproxy version is 2.8.14-c23fe91
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.286 189283 INFO nova.virt.libvirt.driver [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Deletion of /var/lib/nova/instances/47d38c42-e665-400f-831e-4bb560cd5fdb_del complete
Dec 10 20:13:48 compute-0 neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee[250080]: [NOTICE]   (250084) : path to executable is /usr/sbin/haproxy
Dec 10 20:13:48 compute-0 neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee[250080]: [ALERT]    (250084) : Current worker (250086) exited with code 143 (Terminated)
Dec 10 20:13:48 compute-0 neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee[250080]: [WARNING]  (250084) : All workers exited. Exiting... (0)
Dec 10 20:13:48 compute-0 systemd[1]: libpod-94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357.scope: Deactivated successfully.
Dec 10 20:13:48 compute-0 podman[250244]: 2025-12-10 20:13:48.299995336 +0000 UTC m=+0.123267443 container died 94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.309 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.310 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357-userdata-shm.mount: Deactivated successfully.
Dec 10 20:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4e6449958400efca171ee75ac5d58f7b2861b7eaff2f9910b72cb3f9ca1669d-merged.mount: Deactivated successfully.
Dec 10 20:13:48 compute-0 podman[250244]: 2025-12-10 20:13:48.358901068 +0000 UTC m=+0.182173175 container cleanup 94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.357 189283 INFO nova.compute.manager [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Took 0.41 seconds to destroy the instance on the hypervisor.
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.358 189283 DEBUG oslo.service.loopingcall [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.359 189283 DEBUG nova.compute.manager [-] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.359 189283 DEBUG nova.network.neutron [-] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.369 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.read.bytes volume: 21968896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.370 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.371 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.371 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/cpu volume: 33540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.372 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/cpu volume: 4300000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:13:48.371418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.372 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/cpu volume: 34010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.372 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/cpu volume: 28740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.372 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/cpu volume: 1750000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.373 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.373 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.373 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.read.latency volume: 681200143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.373 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.read.latency volume: 74526722 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.374 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.read.latency volume: 407984128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.374 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.read.latency volume: 776993192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:13:48.373446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.374 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.read.latency volume: 125344953 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.374 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.read.latency volume: 535524470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.375 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.read.latency volume: 1494270 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.375 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.read.latency volume: 589208051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.375 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.read.latency volume: 1095570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.read.requests volume: 1108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:13:48.376395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.376 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.377 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.377 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.read.requests volume: 1090 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.377 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.377 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.377 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.378 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.read.requests volume: 704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.378 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 systemd[1]: libpod-conmon-94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357.scope: Deactivated successfully.
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.378 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.380 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.380 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.380 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:13:48.380311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.381 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.381 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.381 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.382 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.382 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.382 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.382 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.write.bytes volume: 72916992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.383 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:13:48.383689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.384 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.384 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.write.bytes volume: 72904704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.384 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.385 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.385 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.385 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.385 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.386 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.387 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.387 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.387 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:13:48.387032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.387 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.388 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.388 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.388 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.389 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.write.latency volume: 4954814500 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:13:48.388796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.389 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.389 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.389 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.write.latency volume: 3394677565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.389 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.390 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.390 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.390 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.390 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.392 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.392 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.write.requests volume: 301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.392 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:13:48.391973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.392 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.392 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.write.requests volume: 303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.393 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.393 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.393 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.393 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.393 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.394 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.394 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.395 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.395 14 DEBUG ceilometer.compute.pollsters [-] 63639261-d8d9-46e1-8b3f-55af36a85e58/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.395 14 DEBUG ceilometer.compute.pollsters [-] 47d38c42-e665-400f-831e-4bb560cd5fdb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:13:48.395482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.396 14 DEBUG ceilometer.compute.pollsters [-] 81f60881-4334-4ede-a10d-454a7e8a4154/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.396 14 DEBUG ceilometer.compute.pollsters [-] a4a66175-57ff-48da-8473-e93f72da4499/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.396 14 DEBUG ceilometer.compute.pollsters [-] 1de5d51a-1c96-47d3-9e57-500874113cc5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T20:13:48.397491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.397 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1460650199>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-849312876>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-626488523>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1430019440>, <NovaLikeServer: tempest-ServersTestJSON-server-633682198>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1460650199>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-849312876>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-626488523>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1430019440>, <NovaLikeServer: tempest-ServersTestJSON-server-633682198>]
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:13:48.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:13:48 compute-0 podman[250286]: 2025-12-10 20:13:48.443697649 +0000 UTC m=+0.053776514 container remove 94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.451 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ff00e085-b4cc-4e4c-948d-630188b5e896]: (4, ('Wed Dec 10 08:13:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee (94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357)\n94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357\nWed Dec 10 08:13:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee (94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357)\n94dba0ec2847537785655addedb650ef8e81858d1fd4e1be1d73e0d7e9aa7357\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.454 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6f88d462-738f-4da6-87eb-d3b98a0da8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.455 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa84a3a12-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.457 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 kernel: tapa84a3a12-10: left promiscuous mode
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.470 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 nova_compute[189279]: 2025-12-10 20:13:48.472 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.474 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[196ac59a-8844-4db5-8b2c-c3ccf5e95de1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.489 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ed04c34d-6738-470c-b798-a1e8a057ecaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.491 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4fbe31-2dc2-4273-8837-e5e39bf0a92d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.507 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[97c1ed03-1ff1-4e88-a13c-ee99884d8abb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495130, 'reachable_time': 34152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250301, 'error': None, 'target': 'ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.511 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a84a3a12-17fa-4570-b2cb-3daff5d43bee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:13:48 compute-0 systemd[1]: run-netns-ovnmeta\x2da84a3a12\x2d17fa\x2d4570\x2db2cb\x2d3daff5d43bee.mount: Deactivated successfully.
Dec 10 20:13:48 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:48.511 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[8dff4f45-ecef-4416-a1ad-23ede2de647d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.215 189283 DEBUG nova.network.neutron [-] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.244 189283 INFO nova.compute.manager [-] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Took 0.88 seconds to deallocate network for instance.
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.300 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.301 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.439 189283 DEBUG nova.compute.provider_tree [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.462 189283 DEBUG nova.scheduler.client.report [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.493 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.533 189283 INFO nova.scheduler.client.report [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Deleted allocations for instance 47d38c42-e665-400f-831e-4bb560cd5fdb
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.613 189283 DEBUG oslo_concurrency.lockutils [None req-d6666aa7-d5ee-4ceb-8a36-15fd556ce4c5 3f9ff7d7d145486fb37626518d98db5e 76049963481942ac8475b7a40994cc54 - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.979 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.980 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.980 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.980 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.981 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.982 189283 INFO nova.compute.manager [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Terminating instance
Dec 10 20:13:49 compute-0 nova_compute[189279]: 2025-12-10 20:13:49.982 189283 DEBUG nova.compute.manager [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:13:50 compute-0 kernel: tap200d878c-fe (unregistering): left promiscuous mode
Dec 10 20:13:50 compute-0 NetworkManager[56238]: <info>  [1765397630.0158] device (tap200d878c-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.024 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 ovn_controller[97701]: 2025-12-10T20:13:50Z|00135|binding|INFO|Releasing lport 200d878c-fe4b-43e4-bae3-5d660334bbc3 from this chassis (sb_readonly=0)
Dec 10 20:13:50 compute-0 ovn_controller[97701]: 2025-12-10T20:13:50Z|00136|binding|INFO|Setting lport 200d878c-fe4b-43e4-bae3-5d660334bbc3 down in Southbound
Dec 10 20:13:50 compute-0 ovn_controller[97701]: 2025-12-10T20:13:50Z|00137|binding|INFO|Removing iface tap200d878c-fe ovn-installed in OVS
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.030 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.042 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.043 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:dd:d2 10.100.0.7'], port_security=['fa:16:3e:05:dd:d2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1de5d51a-1c96-47d3-9e57-500874113cc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8eb26407c54625b02b8a9d59d7c0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5627a2d3-cce8-4191-b32b-6955bcfdde6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a0293b4-9385-4065-be4d-3094819c09e0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=200d878c-fe4b-43e4-bae3-5d660334bbc3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.044 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 200d878c-fe4b-43e4-bae3-5d660334bbc3 in datapath 5d2be28c-5f23-435e-b8fc-cc5d72257618 unbound from our chassis
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.046 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d2be28c-5f23-435e-b8fc-cc5d72257618, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.048 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[676c45f6-0692-4c70-92ad-769b774593dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.048 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618 namespace which is not needed anymore
Dec 10 20:13:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 10 20:13:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 4.868s CPU time.
Dec 10 20:13:50 compute-0 systemd-machined[155642]: Machine qemu-12-instance-0000000b terminated.
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.149 189283 DEBUG nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received event network-vif-unplugged-ac26be7d-6e8b-41ce-b924-41df4889751e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.149 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.150 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.150 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.150 189283 DEBUG nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] No waiting events found dispatching network-vif-unplugged-ac26be7d-6e8b-41ce-b924-41df4889751e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.150 189283 WARNING nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received unexpected event network-vif-unplugged-ac26be7d-6e8b-41ce-b924-41df4889751e for instance with vm_state deleted and task_state None.
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.151 189283 DEBUG nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received event network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.151 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.151 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.151 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "47d38c42-e665-400f-831e-4bb560cd5fdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.151 189283 DEBUG nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] No waiting events found dispatching network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.152 189283 WARNING nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received unexpected event network-vif-plugged-ac26be7d-6e8b-41ce-b924-41df4889751e for instance with vm_state deleted and task_state None.
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.152 189283 DEBUG nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-changed-200d878c-fe4b-43e4-bae3-5d660334bbc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.152 189283 DEBUG nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Refreshing instance network info cache due to event network-changed-200d878c-fe4b-43e4-bae3-5d660334bbc3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.152 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.152 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.153 189283 DEBUG nova.network.neutron [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Refreshing network info cache for port 200d878c-fe4b-43e4-bae3-5d660334bbc3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:13:50 compute-0 kernel: tap200d878c-fe: entered promiscuous mode
Dec 10 20:13:50 compute-0 kernel: tap200d878c-fe (unregistering): left promiscuous mode
Dec 10 20:13:50 compute-0 NetworkManager[56238]: <info>  [1765397630.2164] manager: (tap200d878c-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Dec 10 20:13:50 compute-0 ovn_controller[97701]: 2025-12-10T20:13:50Z|00138|binding|INFO|Claiming lport 200d878c-fe4b-43e4-bae3-5d660334bbc3 for this chassis.
Dec 10 20:13:50 compute-0 ovn_controller[97701]: 2025-12-10T20:13:50Z|00139|binding|INFO|200d878c-fe4b-43e4-bae3-5d660334bbc3: Claiming fa:16:3e:05:dd:d2 10.100.0.7
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.217 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.228 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:dd:d2 10.100.0.7'], port_security=['fa:16:3e:05:dd:d2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1de5d51a-1c96-47d3-9e57-500874113cc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8eb26407c54625b02b8a9d59d7c0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5627a2d3-cce8-4191-b32b-6955bcfdde6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a0293b4-9385-4065-be4d-3094819c09e0, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=200d878c-fe4b-43e4-bae3-5d660334bbc3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:50 compute-0 ovn_controller[97701]: 2025-12-10T20:13:50Z|00140|binding|INFO|Releasing lport 200d878c-fe4b-43e4-bae3-5d660334bbc3 from this chassis (sb_readonly=0)
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.235 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.260 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:dd:d2 10.100.0.7'], port_security=['fa:16:3e:05:dd:d2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1de5d51a-1c96-47d3-9e57-500874113cc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8eb26407c54625b02b8a9d59d7c0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5627a2d3-cce8-4191-b32b-6955bcfdde6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a0293b4-9385-4065-be4d-3094819c09e0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=200d878c-fe4b-43e4-bae3-5d660334bbc3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:50 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [NOTICE]   (250157) : haproxy version is 2.8.14-c23fe91
Dec 10 20:13:50 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [NOTICE]   (250157) : path to executable is /usr/sbin/haproxy
Dec 10 20:13:50 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [WARNING]  (250157) : Exiting Master process...
Dec 10 20:13:50 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [WARNING]  (250157) : Exiting Master process...
Dec 10 20:13:50 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [ALERT]    (250157) : Current worker (250159) exited with code 143 (Terminated)
Dec 10 20:13:50 compute-0 neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618[250153]: [WARNING]  (250157) : All workers exited. Exiting... (0)
Dec 10 20:13:50 compute-0 systemd[1]: libpod-5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979.scope: Deactivated successfully.
Dec 10 20:13:50 compute-0 podman[250325]: 2025-12-10 20:13:50.289657862 +0000 UTC m=+0.087644541 container died 5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.293 189283 INFO nova.virt.libvirt.driver [-] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Instance destroyed successfully.
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.293 189283 DEBUG nova.objects.instance [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lazy-loading 'resources' on Instance uuid 1de5d51a-1c96-47d3-9e57-500874113cc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.312 189283 DEBUG nova.virt.libvirt.vif [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-633682198',display_name='tempest-ServersTestJSON-server-633682198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-633682198',id=11,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPas89f0VmbRy1Z20sjM939aVj0TNS+R5hgHhKIpN+Lu2sUioSpktjVErWL7xY1SOKpwoWvlEg9TaORbUb+yc3R318/CP5Gjft0vHca1BcBEnIu2/PQSvezTTIQ460wB3w==',key_name='tempest-keypair-887805161',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8eb26407c54625b02b8a9d59d7c0db',ramdisk_id='',reservation_id='r-jfkck80y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-107536503',owner_user_name='tempest-ServersTestJSON-107536503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:13:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95e701c408554b41bc92928902567588',uuid=1de5d51a-1c96-47d3-9e57-500874113cc5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.312 189283 DEBUG nova.network.os_vif_util [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Converting VIF {"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.313 189283 DEBUG nova.network.os_vif_util [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:dd:d2,bridge_name='br-int',has_traffic_filtering=True,id=200d878c-fe4b-43e4-bae3-5d660334bbc3,network=Network(5d2be28c-5f23-435e-b8fc-cc5d72257618),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap200d878c-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.313 189283 DEBUG os_vif [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:dd:d2,bridge_name='br-int',has_traffic_filtering=True,id=200d878c-fe4b-43e4-bae3-5d660334bbc3,network=Network(5d2be28c-5f23-435e-b8fc-cc5d72257618),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap200d878c-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.317 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.318 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap200d878c-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.320 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.321 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.324 189283 INFO os_vif [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:dd:d2,bridge_name='br-int',has_traffic_filtering=True,id=200d878c-fe4b-43e4-bae3-5d660334bbc3,network=Network(5d2be28c-5f23-435e-b8fc-cc5d72257618),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap200d878c-fe')
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.325 189283 INFO nova.virt.libvirt.driver [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Deleting instance files /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5_del
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.326 189283 INFO nova.virt.libvirt.driver [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Deletion of /var/lib/nova/instances/1de5d51a-1c96-47d3-9e57-500874113cc5_del complete
Dec 10 20:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979-userdata-shm.mount: Deactivated successfully.
Dec 10 20:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9391a3f87281ca579fe4ab0bbcf2bf8ff1467dab138ee38ef56319e14dfec0e-merged.mount: Deactivated successfully.
Dec 10 20:13:50 compute-0 podman[250325]: 2025-12-10 20:13:50.351427031 +0000 UTC m=+0.149413720 container cleanup 5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 10 20:13:50 compute-0 systemd[1]: libpod-conmon-5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979.scope: Deactivated successfully.
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.404 189283 INFO nova.compute.manager [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Took 0.42 seconds to destroy the instance on the hypervisor.
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.405 189283 DEBUG oslo.service.loopingcall [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.405 189283 DEBUG nova.compute.manager [-] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.405 189283 DEBUG nova.network.neutron [-] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:13:50 compute-0 podman[250370]: 2025-12-10 20:13:50.489824632 +0000 UTC m=+0.105163694 container remove 5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.514 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3c595b-11a2-4d97-a529-22403811fc85]: (4, ('Wed Dec 10 08:13:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618 (5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979)\n5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979\nWed Dec 10 08:13:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618 (5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979)\n5b8c0124d2cfd55163f1570535d89464b4447c43c11f5928dd346670a9027979\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.516 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b7e025-4319-42ac-87f1-de6be14d10f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.517 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d2be28c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:50 compute-0 kernel: tap5d2be28c-50: left promiscuous mode
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.520 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.530 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5abf49-b7b7-4e6c-9c7f-8f39d96c2e9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 nova_compute[189279]: 2025-12-10 20:13:50.539 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.548 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[df666445-40d3-4d85-8959-9feef6123883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.550 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[2411970a-b7d8-4e36-8436-5c2b298bc42b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.577 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[046e128c-a38f-44ad-9efd-f78472117fd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495265, 'reachable_time': 38774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250388, 'error': None, 'target': 'ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.580 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d2be28c-5f23-435e-b8fc-cc5d72257618 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.581 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[f0785869-e5f1-4e8b-9bd7-be3e1874bacb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.582 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 200d878c-fe4b-43e4-bae3-5d660334bbc3 in datapath 5d2be28c-5f23-435e-b8fc-cc5d72257618 unbound from our chassis
Dec 10 20:13:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d2be28c\x2d5f23\x2d435e\x2db8fc\x2dcc5d72257618.mount: Deactivated successfully.
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.584 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d2be28c-5f23-435e-b8fc-cc5d72257618, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.585 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[02a597b7-f86b-4ad8-8ead-c86089bb5d27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.586 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 200d878c-fe4b-43e4-bae3-5d660334bbc3 in datapath 5d2be28c-5f23-435e-b8fc-cc5d72257618 unbound from our chassis
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.588 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d2be28c-5f23-435e-b8fc-cc5d72257618, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:13:50 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:50.589 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[90a3f3bf-64b1-44f7-a689-40d8f4fd1575]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:13:51 compute-0 nova_compute[189279]: 2025-12-10 20:13:51.976 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:52 compute-0 podman[250397]: 2025-12-10 20:13:52.110949577 +0000 UTC m=+0.078021549 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:13:52 compute-0 podman[250396]: 2025-12-10 20:13:52.123087106 +0000 UTC m=+0.097338692 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.295 189283 DEBUG nova.compute.manager [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-vif-unplugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.296 189283 DEBUG oslo_concurrency.lockutils [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.296 189283 DEBUG oslo_concurrency.lockutils [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.296 189283 DEBUG oslo_concurrency.lockutils [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.296 189283 DEBUG nova.compute.manager [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] No waiting events found dispatching network-vif-unplugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.296 189283 DEBUG nova.compute.manager [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-vif-unplugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.297 189283 DEBUG nova.compute.manager [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.297 189283 DEBUG oslo_concurrency.lockutils [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.297 189283 DEBUG oslo_concurrency.lockutils [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.297 189283 DEBUG oslo_concurrency.lockutils [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.297 189283 DEBUG nova.compute.manager [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] No waiting events found dispatching network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.297 189283 WARNING nova.compute.manager [req-1b1ae820-78ab-4fc3-95c8-7c0c562ca427 req-c1ac7d80-c912-4423-9b72-d7214b1d94b5 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received unexpected event network-vif-plugged-200d878c-fe4b-43e4-bae3-5d660334bbc3 for instance with vm_state active and task_state deleting.
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.348 189283 DEBUG nova.network.neutron [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Updated VIF entry in instance network info cache for port 200d878c-fe4b-43e4-bae3-5d660334bbc3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.349 189283 DEBUG nova.network.neutron [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Updating instance_info_cache with network_info: [{"id": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "address": "fa:16:3e:05:dd:d2", "network": {"id": "5d2be28c-5f23-435e-b8fc-cc5d72257618", "bridge": "br-int", "label": "tempest-ServersTestJSON-1378806211-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8eb26407c54625b02b8a9d59d7c0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap200d878c-fe", "ovs_interfaceid": "200d878c-fe4b-43e4-bae3-5d660334bbc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.367 189283 DEBUG oslo_concurrency.lockutils [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-1de5d51a-1c96-47d3-9e57-500874113cc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:13:52 compute-0 nova_compute[189279]: 2025-12-10 20:13:52.368 189283 DEBUG nova.compute.manager [req-6ab469c7-ef3d-4a8c-8041-5bfbca3af154 req-4c9e625d-25b9-4c48-901f-df36d8214798 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Received event network-vif-deleted-ac26be7d-6e8b-41ce-b924-41df4889751e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:52 compute-0 ovn_controller[97701]: 2025-12-10T20:13:52Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:ab:64 10.100.0.14
Dec 10 20:13:52 compute-0 ovn_controller[97701]: 2025-12-10T20:13:52Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:ab:64 10.100.0.14
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.116 189283 DEBUG nova.network.neutron [-] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.139 189283 INFO nova.compute.manager [-] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Took 2.73 seconds to deallocate network for instance.
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.180 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.181 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.305 189283 DEBUG nova.compute.provider_tree [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.319 189283 DEBUG nova.scheduler.client.report [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.341 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.366 189283 INFO nova.scheduler.client.report [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Deleted allocations for instance 1de5d51a-1c96-47d3-9e57-500874113cc5
Dec 10 20:13:53 compute-0 nova_compute[189279]: 2025-12-10 20:13:53.422 189283 DEBUG oslo_concurrency.lockutils [None req-3b914869-b30a-44e1-951b-8e91cca53b6b 95e701c408554b41bc92928902567588 fd8eb26407c54625b02b8a9d59d7c0db - - default default] Lock "1de5d51a-1c96-47d3-9e57-500874113cc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:13:54 compute-0 nova_compute[189279]: 2025-12-10 20:13:54.349 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:54 compute-0 nova_compute[189279]: 2025-12-10 20:13:54.617 189283 DEBUG nova.compute.manager [req-f708337a-fb3d-44f7-9487-432a684f371e req-79f45fe1-3c77-4ffd-a4d7-5bc623625721 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Received event network-vif-deleted-200d878c-fe4b-43e4-bae3-5d660334bbc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:13:55 compute-0 podman[250439]: 2025-12-10 20:13:55.205081697 +0000 UTC m=+0.174368164 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec 10 20:13:55 compute-0 nova_compute[189279]: 2025-12-10 20:13:55.322 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:56.696 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:13:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:56.698 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:13:56 compute-0 nova_compute[189279]: 2025-12-10 20:13:56.697 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:56 compute-0 nova_compute[189279]: 2025-12-10 20:13:56.980 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:57 compute-0 ovn_controller[97701]: 2025-12-10T20:13:57Z|00141|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:13:57 compute-0 ovn_controller[97701]: 2025-12-10T20:13:57Z|00142|binding|INFO|Releasing lport c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc from this chassis (sb_readonly=0)
Dec 10 20:13:57 compute-0 ovn_controller[97701]: 2025-12-10T20:13:57Z|00143|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:13:57 compute-0 nova_compute[189279]: 2025-12-10 20:13:57.216 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:13:57 compute-0 nova_compute[189279]: 2025-12-10 20:13:57.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:13:57.701 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:13:58 compute-0 nova_compute[189279]: 2025-12-10 20:13:58.268 189283 INFO nova.compute.manager [None req-d798ad4b-0841-4bad-9744-57f4c1765f77 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Get console output
Dec 10 20:13:58 compute-0 nova_compute[189279]: 2025-12-10 20:13:58.404 239292 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 10 20:13:59 compute-0 nova_compute[189279]: 2025-12-10 20:13:59.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:13:59 compute-0 podman[203484]: time="2025-12-10T20:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:13:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Dec 10 20:13:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5726 "" "Go-http-client/1.1"
Dec 10 20:14:00 compute-0 podman[250465]: 2025-12-10 20:14:00.135926225 +0000 UTC m=+0.110551429 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:14:00 compute-0 nova_compute[189279]: 2025-12-10 20:14:00.326 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:00 compute-0 nova_compute[189279]: 2025-12-10 20:14:00.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:00 compute-0 nova_compute[189279]: 2025-12-10 20:14:00.581 189283 DEBUG nova.compute.manager [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received event network-changed-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:00 compute-0 nova_compute[189279]: 2025-12-10 20:14:00.581 189283 DEBUG nova.compute.manager [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Refreshing instance network info cache due to event network-changed-3ae03bc4-7221-4da1-8e97-1a1ea168ac84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:14:00 compute-0 nova_compute[189279]: 2025-12-10 20:14:00.581 189283 DEBUG oslo_concurrency.lockutils [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:00 compute-0 nova_compute[189279]: 2025-12-10 20:14:00.582 189283 DEBUG oslo_concurrency.lockutils [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:00 compute-0 nova_compute[189279]: 2025-12-10 20:14:00.582 189283 DEBUG nova.network.neutron [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Refreshing network info cache for port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:14:01 compute-0 openstack_network_exporter[205632]: ERROR   20:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:14:01 compute-0 openstack_network_exporter[205632]: ERROR   20:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:14:01 compute-0 openstack_network_exporter[205632]: ERROR   20:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:14:01 compute-0 openstack_network_exporter[205632]: ERROR   20:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:14:01 compute-0 openstack_network_exporter[205632]: ERROR   20:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.754 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.754 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.755 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.755 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.847 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:01 compute-0 nova_compute[189279]: 2025-12-10 20:14:01.982 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:02 compute-0 ovn_controller[97701]: 2025-12-10T20:14:02Z|00144|binding|INFO|Releasing lport 86b58a68-ab3c-4f05-ad6f-70a78da6a224 from this chassis (sb_readonly=0)
Dec 10 20:14:02 compute-0 ovn_controller[97701]: 2025-12-10T20:14:02Z|00145|binding|INFO|Releasing lport c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc from this chassis (sb_readonly=0)
Dec 10 20:14:02 compute-0 ovn_controller[97701]: 2025-12-10T20:14:02Z|00146|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:14:02 compute-0 nova_compute[189279]: 2025-12-10 20:14:02.532 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:03 compute-0 nova_compute[189279]: 2025-12-10 20:14:03.235 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397628.2339845, 47d38c42-e665-400f-831e-4bb560cd5fdb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:03 compute-0 nova_compute[189279]: 2025-12-10 20:14:03.236 189283 INFO nova.compute.manager [-] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] VM Stopped (Lifecycle Event)
Dec 10 20:14:03 compute-0 nova_compute[189279]: 2025-12-10 20:14:03.268 189283 DEBUG nova.compute.manager [None req-90f247c0-2990-4889-9768-3f51b610c512 - - - - - -] [instance: 47d38c42-e665-400f-831e-4bb560cd5fdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:03 compute-0 nova_compute[189279]: 2025-12-10 20:14:03.392 189283 DEBUG nova.network.neutron [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updated VIF entry in instance network info cache for port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:14:03 compute-0 nova_compute[189279]: 2025-12-10 20:14:03.393 189283 DEBUG nova.network.neutron [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updating instance_info_cache with network_info: [{"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:03 compute-0 nova_compute[189279]: 2025-12-10 20:14:03.419 189283 DEBUG oslo_concurrency.lockutils [req-ff870d84-15c9-4562-9745-7708c00465d7 req-b36f8f44-2981-447c-a1b2-541ded59a43f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.850 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updating instance_info_cache with network_info: [{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.889 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.890 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.891 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.891 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.891 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.892 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.922 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.923 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.924 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:04 compute-0 nova_compute[189279]: 2025-12-10 20:14:04.924 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.031 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.093 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.094 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.159 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.167 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.236 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.237 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.288 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397630.2862098, 1de5d51a-1c96-47d3-9e57-500874113cc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.289 189283 INFO nova.compute.manager [-] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] VM Stopped (Lifecycle Event)
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.299 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.305 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.323 189283 DEBUG nova.compute.manager [None req-9a269090-4f40-4242-9806-6b7f57b3e4b7 - - - - - -] [instance: 1de5d51a-1c96-47d3-9e57-500874113cc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:05 compute-0 nova_compute[189279]: 2025-12-10 20:14:05.331 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.068 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk --force-share --output=json" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.069 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.132 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.565 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.568 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4765MB free_disk=72.24370574951172GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.568 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.569 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.687 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 63639261-d8d9-46e1-8b3f-55af36a85e58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.688 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 81f60881-4334-4ede-a10d-454a7e8a4154 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.688 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance a4a66175-57ff-48da-8473-e93f72da4499 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.689 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.689 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.784 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.803 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.828 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.828 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:06 compute-0 nova_compute[189279]: 2025-12-10 20:14:06.985 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:07 compute-0 nova_compute[189279]: 2025-12-10 20:14:07.428 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:08 compute-0 nova_compute[189279]: 2025-12-10 20:14:08.680 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:09 compute-0 podman[250503]: 2025-12-10 20:14:09.152163164 +0000 UTC m=+0.110694293 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:14:09 compute-0 podman[250504]: 2025-12-10 20:14:09.167277432 +0000 UTC m=+0.123410546 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.699 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.700 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.721 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.778 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.778 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.785 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.785 189283 INFO nova.compute.claims [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:14:09 compute-0 nova_compute[189279]: 2025-12-10 20:14:09.987 189283 DEBUG nova.compute.provider_tree [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.002 189283 DEBUG nova.scheduler.client.report [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.018 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.019 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.078 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.078 189283 DEBUG nova.network.neutron [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.101 189283 INFO nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.117 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.220 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.221 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.222 189283 INFO nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Creating image(s)
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.223 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "/var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.223 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "/var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.224 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "/var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.237 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.335 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.342 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.344 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.344 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.359 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.427 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.429 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.480 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.481 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.482 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.514 189283 DEBUG nova.policy [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '598a18069aae495194ab1b43958530aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a51cea6d1cb40c383b87a400100e902', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.575 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.576 189283 DEBUG nova.virt.disk.api [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Checking if we can resize image /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.577 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.651 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.653 189283 DEBUG nova.virt.disk.api [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Cannot resize image /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.653 189283 DEBUG nova.objects.instance [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lazy-loading 'migration_context' on Instance uuid a6e19ece-bf39-4c33-bf2a-857b75ae2ca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.669 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.670 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Ensure instance console log exists: /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.671 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.671 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:10 compute-0 nova_compute[189279]: 2025-12-10 20:14:10.672 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:11 compute-0 nova_compute[189279]: 2025-12-10 20:14:11.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:11 compute-0 nova_compute[189279]: 2025-12-10 20:14:11.850 189283 DEBUG nova.network.neutron [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Successfully created port: 88679bfc-126b-4704-b224-65b502faa33c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:14:11 compute-0 nova_compute[189279]: 2025-12-10 20:14:11.989 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:13 compute-0 nova_compute[189279]: 2025-12-10 20:14:13.452 189283 DEBUG nova.network.neutron [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Successfully updated port: 88679bfc-126b-4704-b224-65b502faa33c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:14:13 compute-0 nova_compute[189279]: 2025-12-10 20:14:13.475 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:13 compute-0 nova_compute[189279]: 2025-12-10 20:14:13.476 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquired lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:13 compute-0 nova_compute[189279]: 2025-12-10 20:14:13.476 189283 DEBUG nova.network.neutron [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:14:13 compute-0 nova_compute[189279]: 2025-12-10 20:14:13.716 189283 DEBUG nova.objects.instance [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lazy-loading 'flavor' on Instance uuid 81f60881-4334-4ede-a10d-454a7e8a4154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:13 compute-0 nova_compute[189279]: 2025-12-10 20:14:13.759 189283 DEBUG oslo_concurrency.lockutils [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:13 compute-0 nova_compute[189279]: 2025-12-10 20:14:13.760 189283 DEBUG oslo_concurrency.lockutils [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquired lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:14 compute-0 nova_compute[189279]: 2025-12-10 20:14:14.004 189283 DEBUG nova.compute.manager [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received event network-changed-88679bfc-126b-4704-b224-65b502faa33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:14 compute-0 nova_compute[189279]: 2025-12-10 20:14:14.005 189283 DEBUG nova.compute.manager [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Refreshing instance network info cache due to event network-changed-88679bfc-126b-4704-b224-65b502faa33c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:14:14 compute-0 nova_compute[189279]: 2025-12-10 20:14:14.006 189283 DEBUG oslo_concurrency.lockutils [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:14 compute-0 nova_compute[189279]: 2025-12-10 20:14:14.084 189283 DEBUG nova.network.neutron [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:14:15 compute-0 nova_compute[189279]: 2025-12-10 20:14:15.340 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.118 189283 DEBUG nova.network.neutron [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Updating instance_info_cache with network_info: [{"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.138 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Releasing lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.139 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Instance network_info: |[{"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.140 189283 DEBUG oslo_concurrency.lockutils [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.141 189283 DEBUG nova.network.neutron [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Refreshing network info cache for port 88679bfc-126b-4704-b224-65b502faa33c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
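[editor's note] The cache refresh logged above is, in effect, a re-read of the port from the Neutron API. A minimal illustrative sketch of the same lookup with openstacksdk; the cloud name "overcloud" is an assumption, substitute whatever clouds.yaml entry applies:

    import openstack

    # "overcloud" is a hypothetical clouds.yaml entry with credentials for this project.
    conn = openstack.connect(cloud="overcloud")
    port = conn.network.get_port("88679bfc-126b-4704-b224-65b502faa33c")
    print(port.status, port.mac_address, port.fixed_ips)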
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.146 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Start _get_guest_xml network_info=[{"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.158 189283 WARNING nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.174 189283 DEBUG nova.virt.libvirt.host [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.175 189283 DEBUG nova.virt.libvirt.host [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.182 189283 DEBUG nova.virt.libvirt.host [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.183 189283 DEBUG nova.virt.libvirt.host [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.184 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.185 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.185 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.186 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.186 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.187 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.187 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.187 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.188 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.188 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.189 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.189 189283 DEBUG nova.virt.hardware [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.197 189283 DEBUG nova.virt.libvirt.vif [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533550386',display_name='tempest-TestNetworkBasicOps-server-533550386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533550386',id=13,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGsc3usGSP/eb9jvsTnTbTDerbvN0ujKXnuP5Gvg8Yxo/cp4pbqHTtwR/dY8oDnL/K7RXoxdyL671S0DK/mzUQmJB9rBBRMBy2+GhTJk137Df4WJHorZu/n2ySj7/2KngA==',key_name='tempest-TestNetworkBasicOps-539295554',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a51cea6d1cb40c383b87a400100e902',ramdisk_id='',reservation_id='r-l0lg2y0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1301966146',owner_user_name='tempest-TestNetworkBasicOps-1301966146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:14:10Z,user_data=None,user_id='598a18069aae495194ab1b43958530aa',uuid=a6e19ece-bf39-4c33-bf2a-857b75ae2ca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.197 189283 DEBUG nova.network.os_vif_util [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converting VIF {"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.198 189283 DEBUG nova.network.os_vif_util [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:f8:d3,bridge_name='br-int',has_traffic_filtering=True,id=88679bfc-126b-4704-b224-65b502faa33c,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88679bfc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
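[editor's note] The converted VIFOpenVSwitch object above can be rebuilt by hand with the os-vif object model. This is only an illustrative sketch: the field values are taken from the log line, while the interface_id on the port profile is an assumption:

    from os_vif.objects import network, vif

    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id="88679bfc-126b-4704-b224-65b502faa33c")  # assumed value, not in the repr above
    converted = vif.VIFOpenVSwitch(
        id="88679bfc-126b-4704-b224-65b502faa33c",
        address="fa:16:3e:60:f8:d3",
        bridge_name="br-int",
        vif_name="tap88679bfc-12",
        has_traffic_filtering=True,
        active=False,
        preserve_on_delete=False,
        network=network.Network(id="4388b363-773a-4716-8c7d-00d02392bfdb"),
        port_profile=profile)
    print(converted)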
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.200 189283 DEBUG nova.objects.instance [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6e19ece-bf39-4c33-bf2a-857b75ae2ca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.215 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <uuid>a6e19ece-bf39-4c33-bf2a-857b75ae2ca1</uuid>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <name>instance-0000000d</name>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <nova:name>tempest-TestNetworkBasicOps-server-533550386</nova:name>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:14:16</nova:creationTime>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:user uuid="598a18069aae495194ab1b43958530aa">tempest-TestNetworkBasicOps-1301966146-project-member</nova:user>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:project uuid="8a51cea6d1cb40c383b87a400100e902">tempest-TestNetworkBasicOps-1301966146</nova:project>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         <nova:port uuid="88679bfc-126b-4704-b224-65b502faa33c">
Dec 10 20:14:16 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <system>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <entry name="serial">a6e19ece-bf39-4c33-bf2a-857b75ae2ca1</entry>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <entry name="uuid">a6e19ece-bf39-4c33-bf2a-857b75ae2ca1</entry>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </system>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <os>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   </os>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <features>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   </features>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.config"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:60:f8:d3"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <target dev="tap88679bfc-12"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/console.log" append="off"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <video>
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </video>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:14:16 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:14:16 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:14:16 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:14:16 compute-0 nova_compute[189279]: </domain>
Dec 10 20:14:16 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
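[editor's note] The multi-line dump above is the complete libvirt domain XML that Nova generated for this guest. As a minimal sketch (not Nova's exact call path), an equivalent guest could be defined from a saved copy of that XML with the libvirt Python bindings; the domain.xml path is hypothetical:

    import libvirt

    with open("domain.xml") as f:          # assumed local copy of the XML logged above
        xml = f.read()
    conn = libvirt.open("qemu:///system")  # local system hypervisor
    dom = conn.defineXML(xml)              # persistent definition; does not start the guest
    print(dom.name(), dom.UUIDString())
    conn.close()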
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.216 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Preparing to wait for external event network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.216 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.217 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.217 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
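[editor's note] The Acquiring/acquired/released triplets that recur throughout these messages come from oslo.concurrency's lockutils. A minimal sketch of the same pattern, with an illustrative function name:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events")
    def create_or_get_event():
        # critical section: only one greenthread per lock name runs this at a time
        return "event"

    create_or_get_event()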
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.218 189283 DEBUG nova.virt.libvirt.vif [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533550386',display_name='tempest-TestNetworkBasicOps-server-533550386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533550386',id=13,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGsc3usGSP/eb9jvsTnTbTDerbvN0ujKXnuP5Gvg8Yxo/cp4pbqHTtwR/dY8oDnL/K7RXoxdyL671S0DK/mzUQmJB9rBBRMBy2+GhTJk137Df4WJHorZu/n2ySj7/2KngA==',key_name='tempest-TestNetworkBasicOps-539295554',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a51cea6d1cb40c383b87a400100e902',ramdisk_id='',reservation_id='r-l0lg2y0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1301966146',owner_user_name='tempest-TestNetworkBasicOps-1301966146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:14:10Z,user_data=None,user_id='598a18069aae495194ab1b43958530aa',uuid=a6e19ece-bf39-4c33-bf2a-857b75ae2ca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.219 189283 DEBUG nova.network.os_vif_util [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converting VIF {"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.220 189283 DEBUG nova.network.os_vif_util [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:f8:d3,bridge_name='br-int',has_traffic_filtering=True,id=88679bfc-126b-4704-b224-65b502faa33c,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88679bfc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.221 189283 DEBUG os_vif [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:f8:d3,bridge_name='br-int',has_traffic_filtering=True,id=88679bfc-126b-4704-b224-65b502faa33c,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88679bfc-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.222 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.222 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.223 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.229 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.230 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88679bfc-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.231 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88679bfc-12, col_values=(('external_ids', {'iface-id': '88679bfc-126b-4704-b224-65b502faa33c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:f8:d3', 'vm-uuid': 'a6e19ece-bf39-4c33-bf2a-857b75ae2ca1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:16 compute-0 NetworkManager[56238]: <info>  [1765397656.2363] manager: (tap88679bfc-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.239 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.248 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.249 189283 INFO os_vif [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:f8:d3,bridge_name='br-int',has_traffic_filtering=True,id=88679bfc-126b-4704-b224-65b502faa33c,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88679bfc-12')
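[editor's note] os-vif carried out this plug through OVSDB (the AddBridgeCommand, AddPortCommand and DbSetCommand transactions logged above). A roughly equivalent, purely illustrative sequence using the ovs-vsctl CLI with the values from this plug:

    import subprocess

    port = "tap88679bfc-12"
    iface_id = "88679bfc-126b-4704-b224-65b502faa33c"
    mac = "fa:16:3e:60:f8:d3"
    vm_uuid = "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1"

    # Create the bridge and port if missing, then set the same external_ids Nova wrote.
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-br", "br-int",
         "--", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=" + iface_id,
         "external_ids:iface-status=active",
         "external_ids:attached-mac=" + mac,
         "external_ids:vm-uuid=" + vm_uuid],
        check=True)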
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.299 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.300 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.300 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] No VIF found with MAC fa:16:3e:60:f8:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.301 189283 INFO nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Using config drive
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.467 189283 DEBUG nova.network.neutron [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.605 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.606 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.607 189283 INFO nova.compute.manager [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Rebooting instance
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.624 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.624 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquired lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.625 189283 DEBUG nova.network.neutron [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.700 189283 DEBUG nova.compute.manager [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.701 189283 DEBUG nova.compute.manager [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing instance network info cache due to event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.702 189283 DEBUG oslo_concurrency.lockutils [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:16 compute-0 nova_compute[189279]: 2025-12-10 20:14:16.992 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:17 compute-0 podman[250567]: 2025-12-10 20:14:17.134550561 +0000 UTC m=+0.101172196 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, name=ubi9)
Dec 10 20:14:17 compute-0 podman[250565]: 2025-12-10 20:14:17.138978721 +0000 UTC m=+0.116077669 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 10 20:14:17 compute-0 podman[250566]: 2025-12-10 20:14:17.156763151 +0000 UTC m=+0.124060124 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.247 189283 INFO nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Creating config drive at /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.config
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.256 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpip4ve8s4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.385 189283 DEBUG oslo_concurrency.processutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpip4ve8s4" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
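[editor's note] The ISO produced by the mkisofs command above is the config drive attached as the sata cdrom in the domain XML. A small illustrative check of its contents, assuming the isoinfo tool from genisoimage is available on the host:

    import subprocess

    iso = "/var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1/disk.config"
    subprocess.run(["isoinfo", "-d", "-i", iso], check=True)        # volume id should read config-2
    subprocess.run(["isoinfo", "-R", "-l", "-i", iso], check=True)  # list files, e.g. openstack/latest/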
Dec 10 20:14:17 compute-0 kernel: tap88679bfc-12: entered promiscuous mode
Dec 10 20:14:17 compute-0 NetworkManager[56238]: <info>  [1765397657.4845] manager: (tap88679bfc-12): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.485 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:17 compute-0 ovn_controller[97701]: 2025-12-10T20:14:17Z|00147|binding|INFO|Claiming lport 88679bfc-126b-4704-b224-65b502faa33c for this chassis.
Dec 10 20:14:17 compute-0 ovn_controller[97701]: 2025-12-10T20:14:17Z|00148|binding|INFO|88679bfc-126b-4704-b224-65b502faa33c: Claiming fa:16:3e:60:f8:d3 10.100.0.3
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.493 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:f8:d3 10.100.0.3'], port_security=['fa:16:3e:60:f8:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a6e19ece-bf39-4c33-bf2a-857b75ae2ca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4388b363-773a-4716-8c7d-00d02392bfdb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a51cea6d1cb40c383b87a400100e902', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7599f2eb-72eb-4309-86ab-70d46a94e479', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e9ca3af-f428-458c-a5cc-cfb31b816028, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=88679bfc-126b-4704-b224-65b502faa33c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.494 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 88679bfc-126b-4704-b224-65b502faa33c in datapath 4388b363-773a-4716-8c7d-00d02392bfdb bound to our chassis
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.496 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4388b363-773a-4716-8c7d-00d02392bfdb
Dec 10 20:14:17 compute-0 ovn_controller[97701]: 2025-12-10T20:14:17Z|00149|binding|INFO|Setting lport 88679bfc-126b-4704-b224-65b502faa33c ovn-installed in OVS
Dec 10 20:14:17 compute-0 ovn_controller[97701]: 2025-12-10T20:14:17Z|00150|binding|INFO|Setting lport 88679bfc-126b-4704-b224-65b502faa33c up in Southbound
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.502 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.505 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.517 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2601b1-95a0-48d1-ad93-c0f116a0b2b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:17 compute-0 systemd-udevd[250642]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:14:17 compute-0 systemd-machined[155642]: New machine qemu-13-instance-0000000d.
Dec 10 20:14:17 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Dec 10 20:14:17 compute-0 NetworkManager[56238]: <info>  [1765397657.5590] device (tap88679bfc-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:14:17 compute-0 NetworkManager[56238]: <info>  [1765397657.5599] device (tap88679bfc-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.558 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f3b8fe-be90-464d-8cd8-301b975e0327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.565 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[03f7f2d9-67cc-42d5-b419-86845b8afe81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.608 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c5f105-4b3f-4ff0-bcc1-999ae2823de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.635 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[7475ae3b-4079-451d-9158-a644f777318b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4388b363-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492582, 'reachable_time': 15439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250652, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.654 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[9c529fbf-004e-4e7d-8b46-9cf7d471b6b1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4388b363-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492601, 'tstamp': 492601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250654, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4388b363-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492605, 'tstamp': 492605}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250654, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.656 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4388b363-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.659 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:17 compute-0 nova_compute[189279]: 2025-12-10 20:14:17.662 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.664 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4388b363-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.665 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.665 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4388b363-70, col_values=(('external_ids', {'iface-id': 'c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:17 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:17.666 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.187 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397658.1871161, a6e19ece-bf39-4c33-bf2a-857b75ae2ca1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.188 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] VM Started (Lifecycle Event)
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.213 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.227 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397658.1873808, a6e19ece-bf39-4c33-bf2a-857b75ae2ca1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.228 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] VM Paused (Lifecycle Event)
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.255 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.264 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.289 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.604 189283 DEBUG nova.network.neutron [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updating instance_info_cache with network_info: [{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.620 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Releasing lock "refresh_cache-63639261-d8d9-46e1-8b3f-55af36a85e58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.622 189283 DEBUG nova.compute.manager [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.752 189283 DEBUG nova.network.neutron [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Updated VIF entry in instance network info cache for port 88679bfc-126b-4704-b224-65b502faa33c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.753 189283 DEBUG nova.network.neutron [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Updating instance_info_cache with network_info: [{"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:18 compute-0 kernel: tapa0f4e290-5b (unregistering): left promiscuous mode
Dec 10 20:14:18 compute-0 NetworkManager[56238]: <info>  [1765397658.7732] device (tapa0f4e290-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.775 189283 DEBUG oslo_concurrency.lockutils [req-16a1ac7c-5ed5-4de4-95fa-45e3919405b7 req-d465b197-6aca-41ff-a7aa-eef7e70c5a56 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.794 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:18 compute-0 ovn_controller[97701]: 2025-12-10T20:14:18Z|00151|binding|INFO|Releasing lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 from this chassis (sb_readonly=0)
Dec 10 20:14:18 compute-0 ovn_controller[97701]: 2025-12-10T20:14:18Z|00152|binding|INFO|Setting lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 down in Southbound
Dec 10 20:14:18 compute-0 ovn_controller[97701]: 2025-12-10T20:14:18Z|00153|binding|INFO|Removing iface tapa0f4e290-5b ovn-installed in OVS
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.799 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:18.824 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:b0:0b 10.100.0.8'], port_security=['fa:16:3e:f8:b0:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '63639261-d8d9-46e1-8b3f-55af36a85e58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ecefb2-de1d-4471-80a0-8f797ab99021', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e63db29894648c7a06ef3bcb4b98768', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e991cb1-ab23-4fa3-b4b6-83b24087f30e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b611bc6-8b69-4351-a79d-b310ec70a551, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:14:18 compute-0 nova_compute[189279]: 2025-12-10 20:14:18.828 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:18.829 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 in datapath 77ecefb2-de1d-4471-80a0-8f797ab99021 unbound from our chassis
Dec 10 20:14:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:18.835 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77ecefb2-de1d-4471-80a0-8f797ab99021, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:14:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:18.837 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[37f79ac7-c710-4755-aa6b-729138a2944f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:18 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:18.838 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 namespace which is not needed anymore
Dec 10 20:14:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 10 20:14:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000007.scope: Consumed 41.829s CPU time.
Dec 10 20:14:18 compute-0 systemd-machined[155642]: Machine qemu-8-instance-00000007 terminated.
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.032 189283 INFO nova.virt.libvirt.driver [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance destroyed successfully.
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.034 189283 DEBUG nova.objects.instance [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'resources' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:19 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[249268]: [NOTICE]   (249272) : haproxy version is 2.8.14-c23fe91
Dec 10 20:14:19 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[249268]: [NOTICE]   (249272) : path to executable is /usr/sbin/haproxy
Dec 10 20:14:19 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[249268]: [WARNING]  (249272) : Exiting Master process...
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.054 189283 DEBUG nova.virt.libvirt.vif [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1460650199',display_name='tempest-ServerActionsTestJSON-server-1460650199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1460650199',id=7,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMZ4vtRw7tBhuM4o6MjvfbKNBIl4FQd4G6qFZVFfMRp+DuluVXm6EdlnooCaRI1wwhsIBxXE3togl4a//g9wsD+ZeM3HnXvIhtkdJ8sJuoGMY7C3lFqm65C06eytVKJQw==',key_name='tempest-keypair-71097797',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e63db29894648c7a06ef3bcb4b98768',ramdisk_id='',reservation_id='r-1tl971la',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-53104742',owner_user_name='tempest-ServerActionsTestJSON-53104742-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:14:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c9cd4059c654dd4947e252e9f3acf85',uuid=63639261-d8d9-46e1-8b3f-55af36a85e58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:14:19 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[249268]: [ALERT]    (249272) : Current worker (249274) exited with code 143 (Terminated)
Dec 10 20:14:19 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[249268]: [WARNING]  (249272) : All workers exited. Exiting... (0)
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.054 189283 DEBUG nova.network.os_vif_util [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converting VIF {"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.055 189283 DEBUG nova.network.os_vif_util [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.055 189283 DEBUG os_vif [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.057 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.057 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f4e290-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:19 compute-0 systemd[1]: libpod-bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df.scope: Deactivated successfully.
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.059 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.062 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.066 189283 INFO os_vif [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b')
Dec 10 20:14:19 compute-0 podman[250685]: 2025-12-10 20:14:19.070326013 +0000 UTC m=+0.082079279 container died bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.076 189283 DEBUG nova.virt.libvirt.driver [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Start _get_guest_xml network_info=[{"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.087 189283 WARNING nova.virt.libvirt.driver [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.097 189283 DEBUG nova.virt.libvirt.host [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.098 189283 DEBUG nova.virt.libvirt.host [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.102 189283 DEBUG nova.virt.libvirt.host [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.105 189283 DEBUG nova.virt.libvirt.host [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.106 189283 DEBUG nova.virt.libvirt.driver [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.106 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.106 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.107 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.107 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.107 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.107 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.108 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.108 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.108 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.108 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.108 189283 DEBUG nova.virt.hardware [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.109 189283 DEBUG nova.objects.instance [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df-userdata-shm.mount: Deactivated successfully.
Dec 10 20:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-66006f795d475f944dc9c57ec5cddbcb0f2be2c355de947796f5c9dda7c08028-merged.mount: Deactivated successfully.
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.128 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:19 compute-0 podman[250685]: 2025-12-10 20:14:19.13717003 +0000 UTC m=+0.148923296 container cleanup bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.151 189283 DEBUG nova.network.neutron [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:19 compute-0 systemd[1]: libpod-conmon-bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df.scope: Deactivated successfully.
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.174 189283 DEBUG oslo_concurrency.lockutils [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Releasing lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.174 189283 DEBUG nova.compute.manager [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.175 189283 DEBUG nova.compute.manager [None req-dfffb805-2948-41cd-ae02-8cf2ea1c554d 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] network_info to inject: |[{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.178 189283 DEBUG oslo_concurrency.lockutils [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.178 189283 DEBUG nova.network.neutron [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.209 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.config --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.210 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.210 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.211 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.211 189283 DEBUG nova.virt.libvirt.vif [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1460650199',display_name='tempest-ServerActionsTestJSON-server-1460650199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1460650199',id=7,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMZ4vtRw7tBhuM4o6MjvfbKNBIl4FQd4G6qFZVFfMRp+DuluVXm6EdlnooCaRI1wwhsIBxXE3togl4a//g9wsD+ZeM3HnXvIhtkdJ8sJuoGMY7C3lFqm65C06eytVKJQw==',key_name='tempest-keypair-71097797',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e63db29894648c7a06ef3bcb4b98768',ramdisk_id='',reservation_id='r-1tl971la',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-53104742',owner_user_name='tempest-ServerActionsTestJSON-53104742-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:14:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c9cd4059c654dd4947e252e9f3acf85',uuid=63639261-d8d9-46e1-8b3f-55af36a85e58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.212 189283 DEBUG nova.network.os_vif_util [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converting VIF {"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.213 189283 DEBUG nova.network.os_vif_util [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.213 189283 DEBUG nova.objects.instance [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.243 189283 DEBUG nova.virt.libvirt.driver [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <uuid>63639261-d8d9-46e1-8b3f-55af36a85e58</uuid>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <name>instance-00000007</name>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <nova:name>tempest-ServerActionsTestJSON-server-1460650199</nova:name>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:14:19</nova:creationTime>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:user uuid="0c9cd4059c654dd4947e252e9f3acf85">tempest-ServerActionsTestJSON-53104742-project-member</nova:user>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:project uuid="2e63db29894648c7a06ef3bcb4b98768">tempest-ServerActionsTestJSON-53104742</nova:project>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         <nova:port uuid="a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1">
Dec 10 20:14:19 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <system>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <entry name="serial">63639261-d8d9-46e1-8b3f-55af36a85e58</entry>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <entry name="uuid">63639261-d8d9-46e1-8b3f-55af36a85e58</entry>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </system>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <os>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   </os>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <features>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   </features>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk.config"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:f8:b0:0b"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <target dev="tapa0f4e290-5b"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/console.log" append="off"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <video>
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </video>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <input type="keyboard" bus="usb"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:14:19 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:14:19 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:14:19 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:14:19 compute-0 nova_compute[189279]: </domain>
Dec 10 20:14:19 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.244 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:19 compute-0 podman[250726]: 2025-12-10 20:14:19.253057943 +0000 UTC m=+0.073112728 container remove bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.264 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[2240d273-b59b-4170-b80b-91338565b3ab]: (4, ('Wed Dec 10 08:14:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 (bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df)\nbb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df\nWed Dec 10 08:14:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 (bb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df)\nbb6d306e026d68071dd1a698c695e7cf5b703b552ef3eb5efb9af644eb6c84df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.267 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[2de57efe-b06c-43b8-ba45-dc7a0c1a99f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.268 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ecefb2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:19 compute-0 kernel: tap77ecefb2-d0: left promiscuous mode
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.270 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.278 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[717bc059-1a47-46d8-a59b-3d8ef47a4cf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.284 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.294 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[06f031c9-9b6f-46ad-95df-156b7b08ef5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.296 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7cbe83-8c3d-45a7-8dae-b31a08f04a95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.317 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[21464cfd-a03c-432e-8628-dc1e82df3c77]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490757, 'reachable_time': 15488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250741, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d77ecefb2\x2dde1d\x2d4471\x2d80a0\x2d8f797ab99021.mount: Deactivated successfully.
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.320 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.321 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[cfecf060-3c48-41a5-b5dc-37b3ed66dac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.325 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.327 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.394 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.398 189283 DEBUG nova.objects.instance [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.434 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.493 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.495 189283 DEBUG nova.virt.disk.api [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Checking if we can resize image /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.495 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.564 189283 DEBUG oslo_concurrency.processutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.566 189283 DEBUG nova.virt.disk.api [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Cannot resize image /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.566 189283 DEBUG nova.objects.instance [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'migration_context' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.588 189283 DEBUG nova.virt.libvirt.vif [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1460650199',display_name='tempest-ServerActionsTestJSON-server-1460650199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1460650199',id=7,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMZ4vtRw7tBhuM4o6MjvfbKNBIl4FQd4G6qFZVFfMRp+DuluVXm6EdlnooCaRI1wwhsIBxXE3togl4a//g9wsD+ZeM3HnXvIhtkdJ8sJuoGMY7C3lFqm65C06eytVKJQw==',key_name='tempest-keypair-71097797',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='2e63db29894648c7a06ef3bcb4b98768',ramdisk_id='',reservation_id='r-1tl971la',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-53104742',owner_user_name='tempest-ServerActionsTestJSON-53104742-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:14:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c9cd4059c654dd4947e252e9f3acf85',uuid=63639261-d8d9-46e1-8b3f-55af36a85e58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.589 189283 DEBUG nova.network.os_vif_util [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converting VIF {"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.590 189283 DEBUG nova.network.os_vif_util [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.590 189283 DEBUG os_vif [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.590 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.591 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.591 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.597 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.598 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f4e290-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.598 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0f4e290-5b, col_values=(('external_ids', {'iface-id': 'a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:b0:0b', 'vm-uuid': '63639261-d8d9-46e1-8b3f-55af36a85e58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.600 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 NetworkManager[56238]: <info>  [1765397659.6014] manager: (tapa0f4e290-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.603 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.607 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.608 189283 INFO os_vif [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b')
Dec 10 20:14:19 compute-0 kernel: tapa0f4e290-5b: entered promiscuous mode
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.696 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 ovn_controller[97701]: 2025-12-10T20:14:19Z|00154|binding|INFO|Claiming lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for this chassis.
Dec 10 20:14:19 compute-0 ovn_controller[97701]: 2025-12-10T20:14:19Z|00155|binding|INFO|a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1: Claiming fa:16:3e:f8:b0:0b 10.100.0.8
Dec 10 20:14:19 compute-0 NetworkManager[56238]: <info>  [1765397659.6991] manager: (tapa0f4e290-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec 10 20:14:19 compute-0 NetworkManager[56238]: <info>  [1765397659.7091] device (tapa0f4e290-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:14:19 compute-0 NetworkManager[56238]: <info>  [1765397659.7096] device (tapa0f4e290-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.712 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:b0:0b 10.100.0.8'], port_security=['fa:16:3e:f8:b0:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '63639261-d8d9-46e1-8b3f-55af36a85e58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ecefb2-de1d-4471-80a0-8f797ab99021', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e63db29894648c7a06ef3bcb4b98768', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e991cb1-ab23-4fa3-b4b6-83b24087f30e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b611bc6-8b69-4351-a79d-b310ec70a551, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.713 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 in datapath 77ecefb2-de1d-4471-80a0-8f797ab99021 bound to our chassis
Dec 10 20:14:19 compute-0 ovn_controller[97701]: 2025-12-10T20:14:19Z|00156|binding|INFO|Setting lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 ovn-installed in OVS
Dec 10 20:14:19 compute-0 ovn_controller[97701]: 2025-12-10T20:14:19Z|00157|binding|INFO|Setting lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 up in Southbound
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.715 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77ecefb2-de1d-4471-80a0-8f797ab99021
Dec 10 20:14:19 compute-0 nova_compute[189279]: 2025-12-10 20:14:19.717 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.728 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3cc1cc-9ef9-4e46-b767-e7ecb91b4954]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.729 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77ecefb2-d1 in ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.733 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77ecefb2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.733 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[8240ac9b-90fc-4d05-a81f-0193e7e68452]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.735 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8d923b-b0a6-4e9b-85ae-46d2a69869bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 systemd-machined[155642]: New machine qemu-14-instance-00000007.
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.750 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[21f712e8-3fc1-4978-9513-a0fb8e4dca0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-00000007.
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.785 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ad291d80-63c3-4309-b3ee-5c5fd1f59709]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.835 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[4372068c-0048-4436-94ae-9c69f99b97bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 NetworkManager[56238]: <info>  [1765397659.8495] manager: (tap77ecefb2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.851 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd279cd-cd07-4cc1-9612-866a2d44b571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.892 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[aa237ddf-f1d8-44a1-8c28-ff7024e8394f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.896 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c345f7-cbe3-4173-be91-51fe99b380ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 NetworkManager[56238]: <info>  [1765397659.9243] device (tap77ecefb2-d0): carrier: link connected
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.931 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[0537fdcd-3b70-40f0-a6b0-faf7f176cf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.955 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4468f2-1ca5-4681-87af-57c00bb81478]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ecefb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:30:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498858, 'reachable_time': 26914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250801, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:19 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:19.981 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[909586ca-2dc7-4cfe-a616-67197e21fc84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:306b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498858, 'tstamp': 498858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250802, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.002 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ecaa5c63-8323-4e5e-bb33-c1488cb0e5fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ecefb2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:30:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498858, 'reachable_time': 26914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250803, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.055 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[1be8305e-61c8-4f06-a1be-8e196aa7c273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.140 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[91e357dc-d40e-46d4-af47-960c3fabc92c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.142 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ecefb2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.142 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.142 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77ecefb2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.145 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:20 compute-0 NetworkManager[56238]: <info>  [1765397660.1457] manager: (tap77ecefb2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec 10 20:14:20 compute-0 kernel: tap77ecefb2-d0: entered promiscuous mode
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.153 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.155 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77ecefb2-d0, col_values=(('external_ids', {'iface-id': '2f9d87e3-f102-4fe2-b4d5-b25a5d31091b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.159 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:20 compute-0 ovn_controller[97701]: 2025-12-10T20:14:20Z|00158|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.177 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.179 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77ecefb2-de1d-4471-80a0-8f797ab99021.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77ecefb2-de1d-4471-80a0-8f797ab99021.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.180 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a0443dd5-6879-4583-9321-a45fb708f2b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.181 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-77ecefb2-de1d-4471-80a0-8f797ab99021
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/77ecefb2-de1d-4471-80a0-8f797ab99021.pid.haproxy
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID 77ecefb2-de1d-4471-80a0-8f797ab99021
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:14:20 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:20.182 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'env', 'PROCESS_TAG=haproxy-77ecefb2-de1d-4471-80a0-8f797ab99021', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77ecefb2-de1d-4471-80a0-8f797ab99021.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.412 189283 DEBUG nova.virt.libvirt.host [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Removed pending event for 63639261-d8d9-46e1-8b3f-55af36a85e58 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.413 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397660.4115138, 63639261-d8d9-46e1-8b3f-55af36a85e58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.413 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] VM Resumed (Lifecycle Event)
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.416 189283 DEBUG nova.compute.manager [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.422 189283 INFO nova.virt.libvirt.driver [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance rebooted successfully.
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.423 189283 DEBUG nova.compute.manager [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.463 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.472 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.485 189283 DEBUG oslo_concurrency.lockutils [None req-b32e1a3f-6203-46d1-b0b4-931de6e459ca 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.492 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397660.4130566, 63639261-d8d9-46e1-8b3f-55af36a85e58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.492 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] VM Started (Lifecycle Event)
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.513 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.520 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:14:20 compute-0 podman[250841]: 2025-12-10 20:14:20.606825734 +0000 UTC m=+0.074932137 container create 5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 20:14:20 compute-0 systemd[1]: Started libpod-conmon-5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b.scope.
Dec 10 20:14:20 compute-0 podman[250841]: 2025-12-10 20:14:20.568542829 +0000 UTC m=+0.036649282 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:14:20 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2faa317b93ad0ba74718b303c9a230aff25eac25eb0f7626d890043d4f64e876/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:14:20 compute-0 podman[250841]: 2025-12-10 20:14:20.715511101 +0000 UTC m=+0.183617514 container init 5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 10 20:14:20 compute-0 podman[250841]: 2025-12-10 20:14:20.722769757 +0000 UTC m=+0.190876160 container start 5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.744 189283 DEBUG nova.objects.instance [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lazy-loading 'flavor' on Instance uuid 81f60881-4334-4ede-a10d-454a7e8a4154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:20 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[250856]: [NOTICE]   (250860) : New worker (250862) forked
Dec 10 20:14:20 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[250856]: [NOTICE]   (250860) : Loading success.
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.777 189283 DEBUG oslo_concurrency.lockutils [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.972 189283 DEBUG nova.network.neutron [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updated VIF entry in instance network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:14:20 compute-0 nova_compute[189279]: 2025-12-10 20:14:20.973 189283 DEBUG nova.network.neutron [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.007 189283 DEBUG oslo_concurrency.lockutils [req-bffdadc8-ea72-4a4f-9795-51b7ba5b0cc5 req-229f3916-d38a-472b-94d7-c8c1508198dd 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.008 189283 DEBUG oslo_concurrency.lockutils [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquired lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.444 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.445 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.453 189283 DEBUG nova.compute.manager [req-7b29640f-ca8f-4ed1-880d-de11ff93ecda req-e0fc3f90-1fbe-4ed0-aca2-9e3191aa8b89 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received event network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.454 189283 DEBUG oslo_concurrency.lockutils [req-7b29640f-ca8f-4ed1-880d-de11ff93ecda req-e0fc3f90-1fbe-4ed0-aca2-9e3191aa8b89 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.454 189283 DEBUG oslo_concurrency.lockutils [req-7b29640f-ca8f-4ed1-880d-de11ff93ecda req-e0fc3f90-1fbe-4ed0-aca2-9e3191aa8b89 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.454 189283 DEBUG oslo_concurrency.lockutils [req-7b29640f-ca8f-4ed1-880d-de11ff93ecda req-e0fc3f90-1fbe-4ed0-aca2-9e3191aa8b89 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.455 189283 DEBUG nova.compute.manager [req-7b29640f-ca8f-4ed1-880d-de11ff93ecda req-e0fc3f90-1fbe-4ed0-aca2-9e3191aa8b89 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Processing event network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.455 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.460 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397661.4603708, a6e19ece-bf39-4c33-bf2a-857b75ae2ca1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.461 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] VM Resumed (Lifecycle Event)
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.463 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.469 189283 INFO nova.virt.libvirt.driver [-] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Instance spawned successfully.
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.469 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.473 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.481 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.486 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.521 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.522 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.523 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.523 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.523 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.524 189283 DEBUG nova.virt.libvirt.driver [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.557 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.580 189283 INFO nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Took 11.36 seconds to spawn the instance on the hypervisor.
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.581 189283 DEBUG nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.582 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.583 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.592 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.593 189283 INFO nova.compute.claims [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.657 189283 INFO nova.compute.manager [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Took 11.90 seconds to build instance.
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.685 189283 DEBUG oslo_concurrency.lockutils [None req-53085e2b-e5ab-4835-8f57-c5ec178aa2af 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.699 189283 DEBUG nova.scheduler.client.report [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.724 189283 DEBUG nova.scheduler.client.report [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.724 189283 DEBUG nova.compute.provider_tree [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.743 189283 DEBUG nova.scheduler.client.report [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.766 189283 DEBUG nova.scheduler.client.report [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.886 189283 DEBUG nova.compute.provider_tree [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.901 189283 DEBUG nova.scheduler.client.report [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.925 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.926 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.978 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.979 189283 DEBUG nova.network.neutron [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:14:21 compute-0 nova_compute[189279]: 2025-12-10 20:14:21.995 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.005 189283 INFO nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.031 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.142 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.145 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.146 189283 INFO nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Creating image(s)
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.147 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "/var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.148 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "/var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.150 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "/var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.152 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.153 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.229 189283 DEBUG nova.network.neutron [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:14:22 compute-0 nova_compute[189279]: 2025-12-10 20:14:22.266 189283 DEBUG nova.policy [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e773c65970c34c9db154c6fea65d9fa4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:14:23 compute-0 podman[250873]: 2025-12-10 20:14:23.12249158 +0000 UTC m=+0.088597276 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:14:23 compute-0 podman[250872]: 2025-12-10 20:14:23.138722709 +0000 UTC m=+0.099248604 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.342 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:23.397 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:23.397 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:23.398 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.429 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.part --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.431 189283 DEBUG nova.virt.images [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] ab2dea70-7375-4e2d-beda-90f19a5ec15e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.432 189283 DEBUG nova.privsep.utils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.433 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.part /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.575 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received event network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.577 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.577 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.578 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.579 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] No waiting events found dispatching network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.579 189283 WARNING nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received unexpected event network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c for instance with vm_state active and task_state None.
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.580 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-unplugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.581 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.581 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.582 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.583 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] No waiting events found dispatching network-vif-unplugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.583 189283 WARNING nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received unexpected event network-vif-unplugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for instance with vm_state active and task_state None.
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.584 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.586 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing instance network info cache due to event network-changed-42ea5f6d-dd00-4169-8385-3b8709530411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.587 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.618 189283 DEBUG nova.network.neutron [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Successfully created port: 809bdeda-a71c-4370-a746-873e31aa580c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.678 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.part /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.converted" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.685 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.765 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.768 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.782 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.849 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.852 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.854 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.876 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.943 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:23 compute-0 nova_compute[189279]: 2025-12-10 20:14:23.947 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede,backing_fmt=raw /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.011 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede,backing_fmt=raw /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk 1073741824" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.013 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.013 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.092 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.094 189283 DEBUG nova.virt.disk.api [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Checking if we can resize image /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.095 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.129 189283 DEBUG nova.network.neutron [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.152 189283 DEBUG oslo_concurrency.lockutils [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Releasing lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.153 189283 DEBUG nova.compute.manager [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.154 189283 DEBUG nova.compute.manager [None req-8c1ae9fb-c182-4ce3-8946-dd5a820b9214 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] network_info to inject: |[{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.156 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.157 189283 DEBUG nova.network.neutron [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Refreshing network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.161 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.162 189283 DEBUG nova.virt.disk.api [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Cannot resize image /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.162 189283 DEBUG nova.objects.instance [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lazy-loading 'migration_context' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.202 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.206 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Ensure instance console log exists: /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.211 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.212 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.212 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.602 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.988 189283 DEBUG nova.compute.manager [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received event network-changed-88679bfc-126b-4704-b224-65b502faa33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.991 189283 DEBUG nova.compute.manager [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Refreshing instance network info cache due to event network-changed-88679bfc-126b-4704-b224-65b502faa33c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.993 189283 DEBUG oslo_concurrency.lockutils [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.994 189283 DEBUG oslo_concurrency.lockutils [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:24 compute-0 nova_compute[189279]: 2025-12-10 20:14:24.995 189283 DEBUG nova.network.neutron [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Refreshing network info cache for port 88679bfc-126b-4704-b224-65b502faa33c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.263 189283 DEBUG nova.network.neutron [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Successfully updated port: 809bdeda-a71c-4370-a746-873e31aa580c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.280 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.282 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.282 189283 DEBUG nova.network.neutron [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.309 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "81f60881-4334-4ede-a10d-454a7e8a4154" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.310 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.311 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.311 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.312 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.313 189283 INFO nova.compute.manager [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Terminating instance
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.315 189283 DEBUG nova.compute.manager [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:14:25 compute-0 kernel: tap42ea5f6d-dd (unregistering): left promiscuous mode
Dec 10 20:14:25 compute-0 NetworkManager[56238]: <info>  [1765397665.3696] device (tap42ea5f6d-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.382 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 ovn_controller[97701]: 2025-12-10T20:14:25Z|00159|binding|INFO|Releasing lport 42ea5f6d-dd00-4169-8385-3b8709530411 from this chassis (sb_readonly=0)
Dec 10 20:14:25 compute-0 ovn_controller[97701]: 2025-12-10T20:14:25Z|00160|binding|INFO|Setting lport 42ea5f6d-dd00-4169-8385-3b8709530411 down in Southbound
Dec 10 20:14:25 compute-0 ovn_controller[97701]: 2025-12-10T20:14:25Z|00161|binding|INFO|Removing iface tap42ea5f6d-dd ovn-installed in OVS
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.401 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.400 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:c2:44 10.100.0.11'], port_security=['fa:16:3e:cb:c2:44 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '81f60881-4334-4ede-a10d-454a7e8a4154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92918959-6e40-4a1a-9c11-463c49c96b2f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2505343710a74a61bea5fcb849a4b61b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1fbcc347-f372-4bb1-a6b2-48981642c44d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2a3fe1a-c75e-4977-a15b-b5bec4793c6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=42ea5f6d-dd00-4169-8385-3b8709530411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.412 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 42ea5f6d-dd00-4169-8385-3b8709530411 in datapath 92918959-6e40-4a1a-9c11-463c49c96b2f unbound from our chassis
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.415 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92918959-6e40-4a1a-9c11-463c49c96b2f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.417 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[965a832d-919c-496a-a874-97338c8e1523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.418 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f namespace which is not needed anymore
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.419 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec 10 20:14:25 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 42.652s CPU time.
Dec 10 20:14:25 compute-0 systemd-machined[155642]: Machine qemu-9-instance-00000009 terminated.
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.549 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.560 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 podman[250946]: 2025-12-10 20:14:25.594845476 +0000 UTC m=+0.176351638 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 10 20:14:25 compute-0 neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f[249388]: [NOTICE]   (249392) : haproxy version is 2.8.14-c23fe91
Dec 10 20:14:25 compute-0 neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f[249388]: [NOTICE]   (249392) : path to executable is /usr/sbin/haproxy
Dec 10 20:14:25 compute-0 neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f[249388]: [WARNING]  (249392) : Exiting Master process...
Dec 10 20:14:25 compute-0 neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f[249388]: [ALERT]    (249392) : Current worker (249394) exited with code 143 (Terminated)
Dec 10 20:14:25 compute-0 neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f[249388]: [WARNING]  (249392) : All workers exited. Exiting... (0)
Dec 10 20:14:25 compute-0 systemd[1]: libpod-6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119.scope: Deactivated successfully.
Dec 10 20:14:25 compute-0 podman[250993]: 2025-12-10 20:14:25.629278787 +0000 UTC m=+0.065965684 container died 6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.630 189283 INFO nova.virt.libvirt.driver [-] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Instance destroyed successfully.
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.631 189283 DEBUG nova.objects.instance [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lazy-loading 'resources' on Instance uuid 81f60881-4334-4ede-a10d-454a7e8a4154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.656 189283 DEBUG nova.virt.libvirt.vif [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-626488523',display_name='tempest-AttachInterfacesUnderV243Test-server-626488523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-626488523',id=9,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBP3hnXJJItrqC2tE+2StxWPo5v8r+cO2041o4z57viHydodhBc3A1F11lyuNnqZZJ0DkYUm7DSnNyDti0OpCRBDZ4I0oFVP9621ZbNz9EpBGBi3KR2K8iEQ9nH1cIH7JA==',key_name='tempest-keypair-945515570',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2505343710a74a61bea5fcb849a4b61b',ramdisk_id='',reservation_id='r-w316cjwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-663599908',owner_user_name='tempest-AttachInterfacesUnderV243Test-663599908-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:14:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9901235a2b1b4cf4b7a0d6fd53dd0396',uuid=81f60881-4334-4ede-a10d-454a7e8a4154,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.657 189283 DEBUG nova.network.os_vif_util [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Converting VIF {"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.658 189283 DEBUG nova.network.os_vif_util [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:c2:44,bridge_name='br-int',has_traffic_filtering=True,id=42ea5f6d-dd00-4169-8385-3b8709530411,network=Network(92918959-6e40-4a1a-9c11-463c49c96b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ea5f6d-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.658 189283 DEBUG os_vif [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:c2:44,bridge_name='br-int',has_traffic_filtering=True,id=42ea5f6d-dd00-4169-8385-3b8709530411,network=Network(92918959-6e40-4a1a-9c11-463c49c96b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ea5f6d-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.660 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.661 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42ea5f6d-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.666 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.668 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.670 189283 INFO os_vif [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:c2:44,bridge_name='br-int',has_traffic_filtering=True,id=42ea5f6d-dd00-4169-8385-3b8709530411,network=Network(92918959-6e40-4a1a-9c11-463c49c96b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ea5f6d-dd')
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.671 189283 INFO nova.virt.libvirt.driver [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Deleting instance files /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154_del
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.672 189283 INFO nova.virt.libvirt.driver [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Deletion of /var/lib/nova/instances/81f60881-4334-4ede-a10d-454a7e8a4154_del complete
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.730 189283 INFO nova.compute.manager [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Took 0.41 seconds to destroy the instance on the hypervisor.
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.730 189283 DEBUG oslo.service.loopingcall [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.731 189283 DEBUG nova.compute.manager [-] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.731 189283 DEBUG nova.network.neutron [-] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:14:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119-userdata-shm.mount: Deactivated successfully.
Dec 10 20:14:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-69e5a88394f091679c82615238771c3ec34710baf052ad094bf630954ab9e627-merged.mount: Deactivated successfully.
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.787 189283 DEBUG nova.network.neutron [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:14:25 compute-0 podman[250993]: 2025-12-10 20:14:25.795949732 +0000 UTC m=+0.232636629 container cleanup 6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 10 20:14:25 compute-0 systemd[1]: libpod-conmon-6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119.scope: Deactivated successfully.
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.858 189283 DEBUG nova.compute.manager [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received event network-changed-809bdeda-a71c-4370-a746-873e31aa580c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.859 189283 DEBUG nova.compute.manager [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Refreshing instance network info cache due to event network-changed-809bdeda-a71c-4370-a746-873e31aa580c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.859 189283 DEBUG oslo_concurrency.lockutils [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:14:25 compute-0 podman[251037]: 2025-12-10 20:14:25.936081249 +0000 UTC m=+0.086659663 container remove 6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.949 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[606d3ff5-4f7a-4466-a2c3-054fc8352fb9]: (4, ('Wed Dec 10 08:14:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f (6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119)\n6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119\nWed Dec 10 08:14:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f (6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119)\n6f05fcf79b2785aadec72b3928b040b55100f4cf537ad0722e3368dad9ab2119\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.952 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[4df21166-0914-4756-b411-7041c8d66ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.954 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92918959-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.956 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 kernel: tap92918959-60: left promiscuous mode
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.960 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:25.974 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a545afd5-1191-42b3-a81c-8e43fe5d559d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:25 compute-0 nova_compute[189279]: 2025-12-10 20:14:25.989 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:26 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:26.000 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[753ab207-5271-4a45-a31b-03d13e244706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:26 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:26.001 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c92a82e2-9d5d-43a7-a731-2f6e80558676]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:26 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:26.028 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f8e484-58ca-440e-a424-647d2f3f364f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491032, 'reachable_time': 20054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251051, 'error': None, 'target': 'ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d92918959\x2d6e40\x2d4a1a\x2d9c11\x2d463c49c96b2f.mount: Deactivated successfully.
Dec 10 20:14:26 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:26.035 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92918959-6e40-4a1a-9c11-463c49c96b2f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:14:26 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:26.035 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[288d776a-3f61-4a48-b963-a184241f8005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.502 189283 DEBUG nova.network.neutron [-] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.524 189283 INFO nova.compute.manager [-] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Took 0.79 seconds to deallocate network for instance.
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.581 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.582 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.632 189283 DEBUG nova.network.neutron [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updated VIF entry in instance network info cache for port 42ea5f6d-dd00-4169-8385-3b8709530411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.633 189283 DEBUG nova.network.neutron [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Updating instance_info_cache with network_info: [{"id": "42ea5f6d-dd00-4169-8385-3b8709530411", "address": "fa:16:3e:cb:c2:44", "network": {"id": "92918959-6e40-4a1a-9c11-463c49c96b2f", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1692550304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2505343710a74a61bea5fcb849a4b61b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ea5f6d-dd", "ovs_interfaceid": "42ea5f6d-dd00-4169-8385-3b8709530411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.663 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-81f60881-4334-4ede-a10d-454a7e8a4154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.663 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.664 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.664 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.664 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.664 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] No waiting events found dispatching network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.664 189283 WARNING nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received unexpected event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for instance with vm_state active and task_state None.
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.664 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.665 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.665 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.665 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.665 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] No waiting events found dispatching network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.666 189283 WARNING nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received unexpected event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for instance with vm_state active and task_state None.
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.666 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.666 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.666 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.666 189283 DEBUG oslo_concurrency.lockutils [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.666 189283 DEBUG nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] No waiting events found dispatching network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.666 189283 WARNING nova.compute.manager [req-eae5975e-63c8-41d4-8840-752240149ce0 req-7abda572-34fb-4b96-9489-e0cccab99e55 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received unexpected event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for instance with vm_state active and task_state None.
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.754 189283 DEBUG nova.compute.provider_tree [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.773 189283 DEBUG nova.scheduler.client.report [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.808 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.836 189283 INFO nova.scheduler.client.report [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Deleted allocations for instance 81f60881-4334-4ede-a10d-454a7e8a4154
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.922 189283 DEBUG oslo_concurrency.lockutils [None req-e8533a59-dd09-4897-b3bf-edb29134b41e 9901235a2b1b4cf4b7a0d6fd53dd0396 2505343710a74a61bea5fcb849a4b61b - - default default] Lock "81f60881-4334-4ede-a10d-454a7e8a4154" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:26 compute-0 nova_compute[189279]: 2025-12-10 20:14:26.999 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.236 189283 DEBUG nova.compute.manager [req-b6ec65fb-2b09-4c3f-92b8-1d716f8c2e73 req-d7351bf0-eec4-40af-8263-f39f9d24baf4 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Received event network-vif-deleted-42ea5f6d-dd00-4169-8385-3b8709530411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.237 189283 INFO nova.compute.manager [req-b6ec65fb-2b09-4c3f-92b8-1d716f8c2e73 req-d7351bf0-eec4-40af-8263-f39f9d24baf4 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Neutron deleted interface 42ea5f6d-dd00-4169-8385-3b8709530411; detaching it from the instance and deleting it from the info cache
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.237 189283 DEBUG nova.network.neutron [req-b6ec65fb-2b09-4c3f-92b8-1d716f8c2e73 req-d7351bf0-eec4-40af-8263-f39f9d24baf4 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.245 189283 DEBUG nova.compute.manager [req-b6ec65fb-2b09-4c3f-92b8-1d716f8c2e73 req-d7351bf0-eec4-40af-8263-f39f9d24baf4 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Detach interface failed, port_id=42ea5f6d-dd00-4169-8385-3b8709530411, reason: Instance 81f60881-4334-4ede-a10d-454a7e8a4154 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.793 189283 DEBUG nova.network.neutron [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.823 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.824 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Instance network_info: |[{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.824 189283 DEBUG oslo_concurrency.lockutils [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.824 189283 DEBUG nova.network.neutron [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Refreshing network info cache for port 809bdeda-a71c-4370-a746-873e31aa580c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.827 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Start _get_guest_xml network_info=[{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:14:12Z,direct_url=<?>,disk_format='qcow2',id=ab2dea70-7375-4e2d-beda-90f19a5ec15e,min_disk=0,min_ram=0,name='tempest-scenario-img--877921737',owner='e773c65970c34c9db154c6fea65d9fa4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:14:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.836 189283 WARNING nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.848 189283 DEBUG nova.virt.libvirt.host [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.848 189283 DEBUG nova.virt.libvirt.host [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.853 189283 DEBUG nova.virt.libvirt.host [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.854 189283 DEBUG nova.virt.libvirt.host [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.854 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.854 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:14:12Z,direct_url=<?>,disk_format='qcow2',id=ab2dea70-7375-4e2d-beda-90f19a5ec15e,min_disk=0,min_ram=0,name='tempest-scenario-img--877921737',owner='e773c65970c34c9db154c6fea65d9fa4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:14:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.855 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.855 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.855 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.856 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.856 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.856 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.857 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.857 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.857 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.857 189283 DEBUG nova.virt.hardware [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.861 189283 DEBUG nova.virt.libvirt.vif [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r',id=14,image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e773c65970c34c9db154c6fea65d9fa4',ramdisk_id='',reservation_id='r-fd9mp2qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1355872434',owner_user_name='tempest-PrometheusGabbiTest-1355872434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:14:22Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='639468767e8f48a1bd0e3dac90a0ec47',uuid=ca7daa1b-94a2-4e08-902b-73be0ab83974,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.862 189283 DEBUG nova.network.os_vif_util [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converting VIF {"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.862 189283 DEBUG nova.network.os_vif_util [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:fb:da,bridge_name='br-int',has_traffic_filtering=True,id=809bdeda-a71c-4370-a746-873e31aa580c,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap809bdeda-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.864 189283 DEBUG nova.objects.instance [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lazy-loading 'pci_devices' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.886 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <uuid>ca7daa1b-94a2-4e08-902b-73be0ab83974</uuid>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <name>instance-0000000e</name>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <nova:name>te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r</nova:name>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:14:27</nova:creationTime>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:user uuid="639468767e8f48a1bd0e3dac90a0ec47">tempest-PrometheusGabbiTest-1355872434-project-member</nova:user>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:project uuid="e773c65970c34c9db154c6fea65d9fa4">tempest-PrometheusGabbiTest-1355872434</nova:project>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="ab2dea70-7375-4e2d-beda-90f19a5ec15e"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         <nova:port uuid="809bdeda-a71c-4370-a746-873e31aa580c">
Dec 10 20:14:27 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.1.68" ipVersion="4"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <system>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <entry name="serial">ca7daa1b-94a2-4e08-902b-73be0ab83974</entry>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <entry name="uuid">ca7daa1b-94a2-4e08-902b-73be0ab83974</entry>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </system>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <os>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   </os>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <features>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   </features>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.config"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:9b:fb:da"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <target dev="tap809bdeda-a7"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/console.log" append="off"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <video>
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </video>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:14:27 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:14:27 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:14:27 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:14:27 compute-0 nova_compute[189279]: </domain>
Dec 10 20:14:27 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.887 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Preparing to wait for external event network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.887 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.888 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.888 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.889 189283 DEBUG nova.virt.libvirt.vif [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r',id=14,image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e773c65970c34c9db154c6fea65d9fa4',ramdisk_id='',reservation_id='r-fd9mp2qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1355872434',owner_user_name='tempest-PrometheusGabbiTest-1355872434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:14:22Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='639468767e8f48a1bd0e3dac90a0ec47',uuid=ca7daa1b-94a2-4e08-902b-73be0ab83974,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.889 189283 DEBUG nova.network.os_vif_util [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converting VIF {"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.890 189283 DEBUG nova.network.os_vif_util [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:fb:da,bridge_name='br-int',has_traffic_filtering=True,id=809bdeda-a71c-4370-a746-873e31aa580c,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap809bdeda-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.890 189283 DEBUG os_vif [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:fb:da,bridge_name='br-int',has_traffic_filtering=True,id=809bdeda-a71c-4370-a746-873e31aa580c,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap809bdeda-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.891 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.891 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.891 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.895 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.895 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap809bdeda-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.895 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap809bdeda-a7, col_values=(('external_ids', {'iface-id': '809bdeda-a71c-4370-a746-873e31aa580c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:fb:da', 'vm-uuid': 'ca7daa1b-94a2-4e08-902b-73be0ab83974'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.897 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.899 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:14:27 compute-0 NetworkManager[56238]: <info>  [1765397667.9006] manager: (tap809bdeda-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.909 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.911 189283 INFO os_vif [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:fb:da,bridge_name='br-int',has_traffic_filtering=True,id=809bdeda-a71c-4370-a746-873e31aa580c,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap809bdeda-a7')
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.982 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.983 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.983 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] No VIF found with MAC fa:16:3e:9b:fb:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:14:27 compute-0 nova_compute[189279]: 2025-12-10 20:14:27.984 189283 INFO nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Using config drive
Dec 10 20:14:28 compute-0 nova_compute[189279]: 2025-12-10 20:14:28.027 189283 DEBUG nova.network.neutron [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Updated VIF entry in instance network info cache for port 88679bfc-126b-4704-b224-65b502faa33c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:14:28 compute-0 nova_compute[189279]: 2025-12-10 20:14:28.028 189283 DEBUG nova.network.neutron [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Updating instance_info_cache with network_info: [{"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:28 compute-0 nova_compute[189279]: 2025-12-10 20:14:28.049 189283 DEBUG oslo_concurrency.lockutils [req-c2f08ced-a0fa-43ae-9f24-84b9467647c5 req-789da192-226e-4073-8df6-b2711b515b07 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.030 189283 INFO nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Creating config drive at /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.config
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.042 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp594fz2u2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.191 189283 DEBUG oslo_concurrency.processutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp594fz2u2" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:14:29 compute-0 kernel: tap809bdeda-a7: entered promiscuous mode
Dec 10 20:14:29 compute-0 ovn_controller[97701]: 2025-12-10T20:14:29Z|00162|binding|INFO|Claiming lport 809bdeda-a71c-4370-a746-873e31aa580c for this chassis.
Dec 10 20:14:29 compute-0 ovn_controller[97701]: 2025-12-10T20:14:29Z|00163|binding|INFO|809bdeda-a71c-4370-a746-873e31aa580c: Claiming fa:16:3e:9b:fb:da 10.100.1.68
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.300 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.311 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:fb:da 10.100.1.68'], port_security=['fa:16:3e:9b:fb:da 10.100.1.68'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.68/16', 'neutron:device_id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5861e367-6dd6-4128-97c5-6a0449548387', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e773c65970c34c9db154c6fea65d9fa4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '423352dd-9d4c-474d-a8f0-1199c6062876', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=742d4e89-613f-49d1-83dc-36d4a9402367, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=809bdeda-a71c-4370-a746-873e31aa580c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.313 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 809bdeda-a71c-4370-a746-873e31aa580c in datapath 5861e367-6dd6-4128-97c5-6a0449548387 bound to our chassis
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.318 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5861e367-6dd6-4128-97c5-6a0449548387
Dec 10 20:14:29 compute-0 NetworkManager[56238]: <info>  [1765397669.3278] manager: (tap809bdeda-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.346 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6476be84-f331-471c-9a01-2cafa4771822]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.347 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5861e367-61 in ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.353 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5861e367-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.353 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[f3778e79-31d0-456e-a7fd-f217f2eb75e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.360 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf3124e-940b-43c6-bc25-1d140485c9cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.364 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:29 compute-0 ovn_controller[97701]: 2025-12-10T20:14:29Z|00164|binding|INFO|Setting lport 809bdeda-a71c-4370-a746-873e31aa580c ovn-installed in OVS
Dec 10 20:14:29 compute-0 ovn_controller[97701]: 2025-12-10T20:14:29Z|00165|binding|INFO|Setting lport 809bdeda-a71c-4370-a746-873e31aa580c up in Southbound
Dec 10 20:14:29 compute-0 systemd-udevd[251075]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.376 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:29 compute-0 systemd-machined[155642]: New machine qemu-15-instance-0000000e.
Dec 10 20:14:29 compute-0 NetworkManager[56238]: <info>  [1765397669.3884] device (tap809bdeda-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.388 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[328aa8c0-8bf1-42f1-be03-ad11a06c1f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 NetworkManager[56238]: <info>  [1765397669.3933] device (tap809bdeda-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:14:29 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.405 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[89a2601f-a64b-4b24-9346-987fa06e795a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.463 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[2641579a-1a1e-42fc-af12-9252b01de3b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.471 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b95f1680-9447-4e54-8e7f-6683249705ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 NetworkManager[56238]: <info>  [1765397669.4730] manager: (tap5861e367-60): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Dec 10 20:14:29 compute-0 systemd-udevd[251077]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.518 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[74687cbe-e75c-472f-a945-90830da69db8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.522 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[5c50dd95-25d1-49cd-a542-71156287cc3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 NetworkManager[56238]: <info>  [1765397669.5562] device (tap5861e367-60): carrier: link connected
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.569 189283 DEBUG nova.compute.manager [req-10e9a7d6-9afc-47b7-ace7-3d6ae7cc28a4 req-37344b8e-bc90-4d4a-804c-026af3d6b6a7 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received event network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.569 189283 DEBUG oslo_concurrency.lockutils [req-10e9a7d6-9afc-47b7-ace7-3d6ae7cc28a4 req-37344b8e-bc90-4d4a-804c-026af3d6b6a7 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.570 189283 DEBUG oslo_concurrency.lockutils [req-10e9a7d6-9afc-47b7-ace7-3d6ae7cc28a4 req-37344b8e-bc90-4d4a-804c-026af3d6b6a7 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.571 189283 DEBUG oslo_concurrency.lockutils [req-10e9a7d6-9afc-47b7-ace7-3d6ae7cc28a4 req-37344b8e-bc90-4d4a-804c-026af3d6b6a7 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.572 189283 DEBUG nova.compute.manager [req-10e9a7d6-9afc-47b7-ace7-3d6ae7cc28a4 req-37344b8e-bc90-4d4a-804c-026af3d6b6a7 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Processing event network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.573 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[06e81a97-490f-4f9e-ab7a-f15ea98a7689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.596 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[794ff5b2-002e-4b11-bedb-88463d3e5ef8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5861e367-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:88:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499821, 'reachable_time': 34183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251106, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.627 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e787b567-c1cb-4cc3-bd98-6e4c87315d1f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:881c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499821, 'tstamp': 499821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251107, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.651 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e84541bb-f0c2-4c84-b27a-937f2efade6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5861e367-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:88:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499821, 'reachable_time': 34183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251108, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.692 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e0636e28-4032-422e-bee4-36a31cbe1b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 podman[203484]: time="2025-12-10T20:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:14:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Dec 10 20:14:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5264 "" "Go-http-client/1.1"
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.800 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bb871e-656c-495c-aa3e-b2e01e87a167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.802 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5861e367-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.803 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.803 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5861e367-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:29 compute-0 NetworkManager[56238]: <info>  [1765397669.8068] manager: (tap5861e367-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec 10 20:14:29 compute-0 kernel: tap5861e367-60: entered promiscuous mode
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.811 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5861e367-60, col_values=(('external_ids', {'iface-id': 'eedd7beb-1e55-4b8d-a932-7d0592d2e98a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:14:29 compute-0 ovn_controller[97701]: 2025-12-10T20:14:29Z|00166|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.813 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.835 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.837 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5861e367-6dd6-4128-97c5-6a0449548387.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5861e367-6dd6-4128-97c5-6a0449548387.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.838 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c36782de-03ac-4b4b-b37a-e048b9aba6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.839 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-5861e367-6dd6-4128-97c5-6a0449548387
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/5861e367-6dd6-4128-97c5-6a0449548387.pid.haproxy
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID 5861e367-6dd6-4128-97c5-6a0449548387
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:14:29 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:29.840 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'env', 'PROCESS_TAG=haproxy-5861e367-6dd6-4128-97c5-6a0449548387', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5861e367-6dd6-4128-97c5-6a0449548387.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.915 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.916 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397669.9150307, ca7daa1b-94a2-4e08-902b-73be0ab83974 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.917 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] VM Started (Lifecycle Event)
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.925 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.930 189283 INFO nova.virt.libvirt.driver [-] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Instance spawned successfully.
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.931 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.951 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.964 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.968 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.969 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.970 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.971 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.971 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:29 compute-0 nova_compute[189279]: 2025-12-10 20:14:29.972 189283 DEBUG nova.virt.libvirt.driver [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.009 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.010 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397669.915939, ca7daa1b-94a2-4e08-902b-73be0ab83974 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.010 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] VM Paused (Lifecycle Event)
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.042 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.048 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397669.9236739, ca7daa1b-94a2-4e08-902b-73be0ab83974 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.048 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] VM Resumed (Lifecycle Event)
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.060 189283 INFO nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Took 7.92 seconds to spawn the instance on the hypervisor.
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.060 189283 DEBUG nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.080 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.085 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.132 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.136 189283 DEBUG nova.network.neutron [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated VIF entry in instance network info cache for port 809bdeda-a71c-4370-a746-873e31aa580c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.137 189283 DEBUG nova.network.neutron [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.157 189283 DEBUG oslo_concurrency.lockutils [req-e037e7a1-b665-459e-8b45-0496acaa022c req-54f0905d-4d5d-46aa-a916-71b8f38fbe20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.159 189283 INFO nova.compute.manager [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Took 8.61 seconds to build instance.
Dec 10 20:14:30 compute-0 nova_compute[189279]: 2025-12-10 20:14:30.190 189283 DEBUG oslo_concurrency.lockutils [None req-0bd249e7-c360-4acf-ad0c-c684ebcc27bf 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:30 compute-0 podman[251146]: 2025-12-10 20:14:30.32808607 +0000 UTC m=+0.094753522 container create 044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 10 20:14:30 compute-0 podman[251146]: 2025-12-10 20:14:30.275787896 +0000 UTC m=+0.042455348 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:14:30 compute-0 systemd[1]: Started libpod-conmon-044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1.scope.
Dec 10 20:14:30 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19d9126430d3f3dc75a85cebc7f2afa5ecc017e265d7746a85832384e5896c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:14:30 compute-0 podman[251146]: 2025-12-10 20:14:30.434360372 +0000 UTC m=+0.201027834 container init 044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 10 20:14:30 compute-0 podman[251146]: 2025-12-10 20:14:30.4468937 +0000 UTC m=+0.213561132 container start 044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 10 20:14:30 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [NOTICE]   (251179) : New worker (251182) forked
Dec 10 20:14:30 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [NOTICE]   (251179) : Loading success.
Dec 10 20:14:30 compute-0 podman[251156]: 2025-12-10 20:14:30.478404613 +0000 UTC m=+0.099852100 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:14:31 compute-0 openstack_network_exporter[205632]: ERROR   20:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:14:31 compute-0 openstack_network_exporter[205632]: ERROR   20:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:14:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:14:31 compute-0 openstack_network_exporter[205632]: ERROR   20:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:14:31 compute-0 openstack_network_exporter[205632]: ERROR   20:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:14:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:14:31 compute-0 openstack_network_exporter[205632]: ERROR   20:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:14:31 compute-0 nova_compute[189279]: 2025-12-10 20:14:31.681 189283 DEBUG nova.compute.manager [req-53f3586c-6059-4626-954b-cc0e64e482bd req-320c7bc5-aa24-48a3-842a-e6384a341732 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received event network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:14:31 compute-0 nova_compute[189279]: 2025-12-10 20:14:31.682 189283 DEBUG oslo_concurrency.lockutils [req-53f3586c-6059-4626-954b-cc0e64e482bd req-320c7bc5-aa24-48a3-842a-e6384a341732 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:14:31 compute-0 nova_compute[189279]: 2025-12-10 20:14:31.682 189283 DEBUG oslo_concurrency.lockutils [req-53f3586c-6059-4626-954b-cc0e64e482bd req-320c7bc5-aa24-48a3-842a-e6384a341732 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:14:31 compute-0 nova_compute[189279]: 2025-12-10 20:14:31.682 189283 DEBUG oslo_concurrency.lockutils [req-53f3586c-6059-4626-954b-cc0e64e482bd req-320c7bc5-aa24-48a3-842a-e6384a341732 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:14:31 compute-0 nova_compute[189279]: 2025-12-10 20:14:31.683 189283 DEBUG nova.compute.manager [req-53f3586c-6059-4626-954b-cc0e64e482bd req-320c7bc5-aa24-48a3-842a-e6384a341732 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] No waiting events found dispatching network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:14:31 compute-0 nova_compute[189279]: 2025-12-10 20:14:31.683 189283 WARNING nova.compute.manager [req-53f3586c-6059-4626-954b-cc0e64e482bd req-320c7bc5-aa24-48a3-842a-e6384a341732 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received unexpected event network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c for instance with vm_state active and task_state None.
Dec 10 20:14:32 compute-0 nova_compute[189279]: 2025-12-10 20:14:32.003 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:32 compute-0 nova_compute[189279]: 2025-12-10 20:14:32.898 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:34 compute-0 ovn_controller[97701]: 2025-12-10T20:14:34Z|00167|binding|INFO|Releasing lport c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc from this chassis (sb_readonly=0)
Dec 10 20:14:34 compute-0 ovn_controller[97701]: 2025-12-10T20:14:34Z|00168|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:14:34 compute-0 ovn_controller[97701]: 2025-12-10T20:14:34Z|00169|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:14:34 compute-0 nova_compute[189279]: 2025-12-10 20:14:34.177 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:37 compute-0 nova_compute[189279]: 2025-12-10 20:14:37.006 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:37 compute-0 nova_compute[189279]: 2025-12-10 20:14:37.902 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:38 compute-0 nova_compute[189279]: 2025-12-10 20:14:38.690 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:40 compute-0 podman[251193]: 2025-12-10 20:14:40.138285421 +0000 UTC m=+0.112187423 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 10 20:14:40 compute-0 podman[251192]: 2025-12-10 20:14:40.144778347 +0000 UTC m=+0.117391084 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:14:40 compute-0 nova_compute[189279]: 2025-12-10 20:14:40.626 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397665.6249528, 81f60881-4334-4ede-a10d-454a7e8a4154 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:14:40 compute-0 nova_compute[189279]: 2025-12-10 20:14:40.627 189283 INFO nova.compute.manager [-] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] VM Stopped (Lifecycle Event)
Dec 10 20:14:40 compute-0 nova_compute[189279]: 2025-12-10 20:14:40.653 189283 DEBUG nova.compute.manager [None req-fc4b4cc9-e5a1-4d14-b961-8a851ae4a0b0 - - - - - -] [instance: 81f60881-4334-4ede-a10d-454a7e8a4154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:14:41 compute-0 nova_compute[189279]: 2025-12-10 20:14:41.802 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:42 compute-0 nova_compute[189279]: 2025-12-10 20:14:42.009 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:42 compute-0 nova_compute[189279]: 2025-12-10 20:14:42.905 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:46 compute-0 ovn_controller[97701]: 2025-12-10T20:14:46Z|00170|binding|INFO|Releasing lport c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc from this chassis (sb_readonly=0)
Dec 10 20:14:46 compute-0 ovn_controller[97701]: 2025-12-10T20:14:46Z|00171|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:14:46 compute-0 ovn_controller[97701]: 2025-12-10T20:14:46Z|00172|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:14:46 compute-0 nova_compute[189279]: 2025-12-10 20:14:46.889 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:47 compute-0 nova_compute[189279]: 2025-12-10 20:14:47.018 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:47 compute-0 nova_compute[189279]: 2025-12-10 20:14:47.908 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:48 compute-0 podman[251235]: 2025-12-10 20:14:48.131277986 +0000 UTC m=+0.105590436 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:14:48 compute-0 podman[251236]: 2025-12-10 20:14:48.166280652 +0000 UTC m=+0.125234427 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 20:14:48 compute-0 podman[251237]: 2025-12-10 20:14:48.173099466 +0000 UTC m=+0.125132163 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, name=ubi9, io.openshift.expose-services=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec 10 20:14:48 compute-0 nova_compute[189279]: 2025-12-10 20:14:48.966 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:52 compute-0 nova_compute[189279]: 2025-12-10 20:14:52.015 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:52 compute-0 nova_compute[189279]: 2025-12-10 20:14:52.911 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:53 compute-0 ovn_controller[97701]: 2025-12-10T20:14:53Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:f8:d3 10.100.0.3
Dec 10 20:14:53 compute-0 ovn_controller[97701]: 2025-12-10T20:14:53Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:f8:d3 10.100.0.3
Dec 10 20:14:54 compute-0 podman[251318]: 2025-12-10 20:14:54.12052614 +0000 UTC m=+0.094637168 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:14:54 compute-0 nova_compute[189279]: 2025-12-10 20:14:54.123 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:54 compute-0 podman[251317]: 2025-12-10 20:14:54.139145394 +0000 UTC m=+0.113143469 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 10 20:14:54 compute-0 ovn_controller[97701]: 2025-12-10T20:14:54Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:b0:0b 10.100.0.8
Dec 10 20:14:56 compute-0 podman[251356]: 2025-12-10 20:14:56.164358313 +0000 UTC m=+0.141465214 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:14:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:56.909 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:14:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:14:56.910 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:14:56 compute-0 nova_compute[189279]: 2025-12-10 20:14:56.913 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:57 compute-0 nova_compute[189279]: 2025-12-10 20:14:57.021 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:57 compute-0 nova_compute[189279]: 2025-12-10 20:14:57.914 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:58 compute-0 nova_compute[189279]: 2025-12-10 20:14:58.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:14:58 compute-0 nova_compute[189279]: 2025-12-10 20:14:58.523 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:14:59 compute-0 podman[203484]: time="2025-12-10T20:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:14:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Dec 10 20:14:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5737 "" "Go-http-client/1.1"
Dec 10 20:14:59 compute-0 nova_compute[189279]: 2025-12-10 20:14:59.944 189283 INFO nova.compute.manager [None req-fe02e210-f877-4e85-a2f3-c38735979bc8 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Get console output
Dec 10 20:14:59 compute-0 nova_compute[189279]: 2025-12-10 20:14:59.952 239292 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.366 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.367 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.370 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.372 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.373 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.374 189283 INFO nova.compute.manager [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Terminating instance
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.376 189283 DEBUG nova.compute.manager [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:15:00 compute-0 kernel: tap88679bfc-12 (unregistering): left promiscuous mode
Dec 10 20:15:00 compute-0 NetworkManager[56238]: <info>  [1765397700.4336] device (tap88679bfc-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:15:00 compute-0 ovn_controller[97701]: 2025-12-10T20:15:00Z|00173|binding|INFO|Releasing lport 88679bfc-126b-4704-b224-65b502faa33c from this chassis (sb_readonly=0)
Dec 10 20:15:00 compute-0 ovn_controller[97701]: 2025-12-10T20:15:00Z|00174|binding|INFO|Setting lport 88679bfc-126b-4704-b224-65b502faa33c down in Southbound
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.449 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:00 compute-0 ovn_controller[97701]: 2025-12-10T20:15:00Z|00175|binding|INFO|Removing iface tap88679bfc-12 ovn-installed in OVS
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.459 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.469 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:f8:d3 10.100.0.3'], port_security=['fa:16:3e:60:f8:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a6e19ece-bf39-4c33-bf2a-857b75ae2ca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4388b363-773a-4716-8c7d-00d02392bfdb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a51cea6d1cb40c383b87a400100e902', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7599f2eb-72eb-4309-86ab-70d46a94e479', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e9ca3af-f428-458c-a5cc-cfb31b816028, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=88679bfc-126b-4704-b224-65b502faa33c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.475 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.476 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 88679bfc-126b-4704-b224-65b502faa33c in datapath 4388b363-773a-4716-8c7d-00d02392bfdb unbound from our chassis
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.478 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4388b363-773a-4716-8c7d-00d02392bfdb
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.491 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:00 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 10 20:15:00 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 35.339s CPU time.
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.512 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[76438a92-1879-4b53-a37d-97a33322139f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:00 compute-0 systemd-machined[155642]: Machine qemu-13-instance-0000000d terminated.
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.555 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[cb185572-d6dd-4ca3-995d-fd8be18dbc4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.561 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b19b9e-d073-4d40-812c-d85699d8ae7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.606 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc7180c-e79d-4050-a1be-5dd2f0127d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:00 compute-0 podman[251385]: 2025-12-10 20:15:00.617293683 +0000 UTC m=+0.099963853 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.639 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a373e3a1-8bac-4739-8b93-cff01a5378b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4388b363-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492582, 'reachable_time': 42309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251415, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.666 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9faaf7-8d81-47f8-bcc4-4280583cc05d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4388b363-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492601, 'tstamp': 492601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251426, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4388b363-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492605, 'tstamp': 492605}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251426, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.670 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4388b363-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.672 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.682 189283 INFO nova.virt.libvirt.driver [-] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Instance destroyed successfully.
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.682 189283 DEBUG nova.objects.instance [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lazy-loading 'resources' on Instance uuid a6e19ece-bf39-4c33-bf2a-857b75ae2ca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.685 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.685 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4388b363-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.685 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.686 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4388b363-70, col_values=(('external_ids', {'iface-id': 'c6649cf0-8544-4fa3-a1cf-44dddb6fbbdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:00 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:00.686 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.881 189283 DEBUG nova.virt.libvirt.vif [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533550386',display_name='tempest-TestNetworkBasicOps-server-533550386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533550386',id=13,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGsc3usGSP/eb9jvsTnTbTDerbvN0ujKXnuP5Gvg8Yxo/cp4pbqHTtwR/dY8oDnL/K7RXoxdyL671S0DK/mzUQmJB9rBBRMBy2+GhTJk137Df4WJHorZu/n2ySj7/2KngA==',key_name='tempest-TestNetworkBasicOps-539295554',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:14:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a51cea6d1cb40c383b87a400100e902',ramdisk_id='',reservation_id='r-l0lg2y0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1301966146',owner_user_name='tempest-TestNetworkBasicOps-1301966146-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:14:21Z,user_data=None,user_id='598a18069aae495194ab1b43958530aa',uuid=a6e19ece-bf39-4c33-bf2a-857b75ae2ca1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.881 189283 DEBUG nova.network.os_vif_util [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converting VIF {"id": "88679bfc-126b-4704-b224-65b502faa33c", "address": "fa:16:3e:60:f8:d3", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88679bfc-12", "ovs_interfaceid": "88679bfc-126b-4704-b224-65b502faa33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.882 189283 DEBUG nova.network.os_vif_util [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:f8:d3,bridge_name='br-int',has_traffic_filtering=True,id=88679bfc-126b-4704-b224-65b502faa33c,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88679bfc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.883 189283 DEBUG os_vif [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:f8:d3,bridge_name='br-int',has_traffic_filtering=True,id=88679bfc-126b-4704-b224-65b502faa33c,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88679bfc-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.884 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.885 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88679bfc-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.889 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.894 189283 INFO os_vif [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:f8:d3,bridge_name='br-int',has_traffic_filtering=True,id=88679bfc-126b-4704-b224-65b502faa33c,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88679bfc-12')
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.896 189283 INFO nova.virt.libvirt.driver [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Deleting instance files /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1_del
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.898 189283 INFO nova.virt.libvirt.driver [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Deletion of /var/lib/nova/instances/a6e19ece-bf39-4c33-bf2a-857b75ae2ca1_del complete
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.978 189283 INFO nova.compute.manager [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Took 0.60 seconds to destroy the instance on the hypervisor.
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.978 189283 DEBUG oslo.service.loopingcall [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.979 189283 DEBUG nova.compute.manager [-] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:15:00 compute-0 nova_compute[189279]: 2025-12-10 20:15:00.979 189283 DEBUG nova.network.neutron [-] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:15:01 compute-0 openstack_network_exporter[205632]: ERROR   20:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:15:01 compute-0 openstack_network_exporter[205632]: ERROR   20:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:15:01 compute-0 openstack_network_exporter[205632]: ERROR   20:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:15:01 compute-0 openstack_network_exporter[205632]: ERROR   20:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:15:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:15:01 compute-0 openstack_network_exporter[205632]: ERROR   20:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:15:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:15:01 compute-0 nova_compute[189279]: 2025-12-10 20:15:01.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.022 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.734 189283 DEBUG nova.network.neutron [-] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.779 189283 INFO nova.compute.manager [-] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Took 1.80 seconds to deallocate network for instance.
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.844 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.844 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.862 189283 DEBUG nova.compute.manager [req-f862d72d-76e6-480b-aca3-f9dd6a7f77f9 req-8c7b8d7f-eaab-4344-9c67-167d38f47db6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received event network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.863 189283 DEBUG oslo_concurrency.lockutils [req-f862d72d-76e6-480b-aca3-f9dd6a7f77f9 req-8c7b8d7f-eaab-4344-9c67-167d38f47db6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.864 189283 DEBUG oslo_concurrency.lockutils [req-f862d72d-76e6-480b-aca3-f9dd6a7f77f9 req-8c7b8d7f-eaab-4344-9c67-167d38f47db6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.865 189283 DEBUG oslo_concurrency.lockutils [req-f862d72d-76e6-480b-aca3-f9dd6a7f77f9 req-8c7b8d7f-eaab-4344-9c67-167d38f47db6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.865 189283 DEBUG nova.compute.manager [req-f862d72d-76e6-480b-aca3-f9dd6a7f77f9 req-8c7b8d7f-eaab-4344-9c67-167d38f47db6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] No waiting events found dispatching network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.866 189283 WARNING nova.compute.manager [req-f862d72d-76e6-480b-aca3-f9dd6a7f77f9 req-8c7b8d7f-eaab-4344-9c67-167d38f47db6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received unexpected event network-vif-plugged-88679bfc-126b-4704-b224-65b502faa33c for instance with vm_state deleted and task_state None.
Dec 10 20:15:02 compute-0 nova_compute[189279]: 2025-12-10 20:15:02.942 189283 DEBUG nova.compute.manager [req-713acb35-a094-4531-9076-2ed5c322b844 req-e88765d8-cc54-4a3a-9bb2-7e1ff65fa949 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Received event network-vif-deleted-88679bfc-126b-4704-b224-65b502faa33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.249 189283 DEBUG nova.compute.provider_tree [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.265 189283 DEBUG nova.scheduler.client.report [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.294 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.618 189283 INFO nova.scheduler.client.report [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Deleted allocations for instance a6e19ece-bf39-4c33-bf2a-857b75ae2ca1
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.687 189283 DEBUG oslo_concurrency.lockutils [None req-2102587e-d736-42a9-8fa5-e6c6699a8ac4 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a6e19ece-bf39-4c33-bf2a-857b75ae2ca1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.779 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.779 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:15:03 compute-0 nova_compute[189279]: 2025-12-10 20:15:03.780 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:15:04 compute-0 nova_compute[189279]: 2025-12-10 20:15:04.114 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:04 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:04.912 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:05 compute-0 ovn_controller[97701]: 2025-12-10T20:15:05Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:fb:da 10.100.1.68
Dec 10 20:15:05 compute-0 ovn_controller[97701]: 2025-12-10T20:15:05Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:fb:da 10.100.1.68
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.591 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a4a66175-57ff-48da-8473-e93f72da4499" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.592 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.592 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "a4a66175-57ff-48da-8473-e93f72da4499-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.592 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.593 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.594 189283 INFO nova.compute.manager [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Terminating instance
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.595 189283 DEBUG nova.compute.manager [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:15:05 compute-0 kernel: tap3ae03bc4-72 (unregistering): left promiscuous mode
Dec 10 20:15:05 compute-0 NetworkManager[56238]: <info>  [1765397705.6572] device (tap3ae03bc4-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:15:05 compute-0 ovn_controller[97701]: 2025-12-10T20:15:05Z|00176|binding|INFO|Releasing lport 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 from this chassis (sb_readonly=0)
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.658 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:05 compute-0 ovn_controller[97701]: 2025-12-10T20:15:05Z|00177|binding|INFO|Setting lport 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 down in Southbound
Dec 10 20:15:05 compute-0 ovn_controller[97701]: 2025-12-10T20:15:05Z|00178|binding|INFO|Removing iface tap3ae03bc4-72 ovn-installed in OVS
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.665 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:05.672 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:ab:64 10.100.0.14'], port_security=['fa:16:3e:a8:ab:64 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a4a66175-57ff-48da-8473-e93f72da4499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4388b363-773a-4716-8c7d-00d02392bfdb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a51cea6d1cb40c383b87a400100e902', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e2eba8bb-e846-494e-a7a9-776afed9b12b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e9ca3af-f428-458c-a5cc-cfb31b816028, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=3ae03bc4-7221-4da1-8e97-1a1ea168ac84) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:15:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:05.675 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 3ae03bc4-7221-4da1-8e97-1a1ea168ac84 in datapath 4388b363-773a-4716-8c7d-00d02392bfdb unbound from our chassis
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.676 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:05.679 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4388b363-773a-4716-8c7d-00d02392bfdb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:15:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:05.684 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad70d75-c028-4972-ad4f-0352ba2cf4c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:05.685 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb namespace which is not needed anymore
Dec 10 20:15:05 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 10 20:15:05 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 46.139s CPU time.
Dec 10 20:15:05 compute-0 systemd-machined[155642]: Machine qemu-10-instance-0000000a terminated.
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.884 189283 INFO nova.virt.libvirt.driver [-] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Instance destroyed successfully.
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.885 189283 DEBUG nova.objects.instance [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lazy-loading 'resources' on Instance uuid a4a66175-57ff-48da-8473-e93f72da4499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.887 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:05 compute-0 neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb[249727]: [NOTICE]   (249731) : haproxy version is 2.8.14-c23fe91
Dec 10 20:15:05 compute-0 neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb[249727]: [NOTICE]   (249731) : path to executable is /usr/sbin/haproxy
Dec 10 20:15:05 compute-0 neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb[249727]: [WARNING]  (249731) : Exiting Master process...
Dec 10 20:15:05 compute-0 neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb[249727]: [ALERT]    (249731) : Current worker (249733) exited with code 143 (Terminated)
Dec 10 20:15:05 compute-0 neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb[249727]: [WARNING]  (249731) : All workers exited. Exiting... (0)
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.911 189283 DEBUG nova.virt.libvirt.vif [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430019440',display_name='tempest-TestNetworkBasicOps-server-1430019440',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430019440',id=10,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB4E+QQbCnMKR4Bqdjha+rs4A0/JyNIyai0SC4OFeCF3EnGfKMIqFc/YZBttl6lpjVQTEtQAwCW4j1L5i/kG3kkf68MHHviiDU+MYShWguHMhoAFUF8RQ+bl7fw8EmQuPQ==',key_name='tempest-TestNetworkBasicOps-103146956',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a51cea6d1cb40c383b87a400100e902',ramdisk_id='',reservation_id='r-fzq7r9os',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1301966146',owner_user_name='tempest-TestNetworkBasicOps-1301966146-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:13:18Z,user_data=None,user_id='598a18069aae495194ab1b43958530aa',uuid=a4a66175-57ff-48da-8473-e93f72da4499,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, 
"qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.911 189283 DEBUG nova.network.os_vif_util [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converting VIF {"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.912 189283 DEBUG nova.network.os_vif_util [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:ab:64,bridge_name='br-int',has_traffic_filtering=True,id=3ae03bc4-7221-4da1-8e97-1a1ea168ac84,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae03bc4-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.913 189283 DEBUG os_vif [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:ab:64,bridge_name='br-int',has_traffic_filtering=True,id=3ae03bc4-7221-4da1-8e97-1a1ea168ac84,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae03bc4-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.915 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:05 compute-0 systemd[1]: libpod-8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4.scope: Deactivated successfully.
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.915 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ae03bc4-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.917 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updating instance_info_cache with network_info: [{"id": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "address": "fa:16:3e:a8:ab:64", "network": {"id": "4388b363-773a-4716-8c7d-00d02392bfdb", "bridge": "br-int", "label": "tempest-network-smoke--2109787748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a51cea6d1cb40c383b87a400100e902", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ae03bc4-72", "ovs_interfaceid": "3ae03bc4-7221-4da1-8e97-1a1ea168ac84", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.918 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.920 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:15:05 compute-0 podman[251476]: 2025-12-10 20:15:05.920523654 +0000 UTC m=+0.066689583 container died 8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.922 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.924 189283 INFO os_vif [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:ab:64,bridge_name='br-int',has_traffic_filtering=True,id=3ae03bc4-7221-4da1-8e97-1a1ea168ac84,network=Network(4388b363-773a-4716-8c7d-00d02392bfdb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ae03bc4-72')
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.925 189283 INFO nova.virt.libvirt.driver [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Deleting instance files /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499_del
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.926 189283 INFO nova.virt.libvirt.driver [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Deletion of /var/lib/nova/instances/a4a66175-57ff-48da-8473-e93f72da4499_del complete
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.933 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-a4a66175-57ff-48da-8473-e93f72da4499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.934 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.934 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4-userdata-shm.mount: Deactivated successfully.
Dec 10 20:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c815b724e4217cb31fd3473513970c4c111cd7d49f74e11af5324501e04a90-merged.mount: Deactivated successfully.
Dec 10 20:15:05 compute-0 podman[251476]: 2025-12-10 20:15:05.980289689 +0000 UTC m=+0.126455618 container cleanup 8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.983 189283 INFO nova.compute.manager [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Took 0.39 seconds to destroy the instance on the hypervisor.
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.983 189283 DEBUG oslo.service.loopingcall [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.983 189283 DEBUG nova.compute.manager [-] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:15:05 compute-0 nova_compute[189279]: 2025-12-10 20:15:05.984 189283 DEBUG nova.network.neutron [-] [instance: a4a66175-57ff-48da-8473-e93f72da4499] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:15:05 compute-0 systemd[1]: libpod-conmon-8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4.scope: Deactivated successfully.
Dec 10 20:15:06 compute-0 podman[251512]: 2025-12-10 20:15:06.085282957 +0000 UTC m=+0.067621938 container remove 8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.096 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[323f405f-d2c6-45ee-aaf7-2c58aed51977]: (4, ('Wed Dec 10 08:15:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb (8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4)\n8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4\nWed Dec 10 08:15:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb (8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4)\n8462988b5fb72bee9f253af738c473387f760e1262eb209f905ca15dba578cb4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.098 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[32f74796-8a6b-4950-909d-046027151584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.099 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4388b363-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.101 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:06 compute-0 kernel: tap4388b363-70: left promiscuous mode
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.117 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.124 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[26ed11db-ce71-4b66-a01c-90af727d3e65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.141 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d8bbdb-088e-487f-bfb1-8a7193b82d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.143 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c8915306-e4ec-4df9-a8cc-b6a6d5ecc493]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.169 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[db4f02c8-40a5-4271-ba9c-b83b64470baf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492570, 'reachable_time': 31834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251526, 'error': None, 'target': 'ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d4388b363\x2d773a\x2d4716\x2d8c7d\x2d00d02392bfdb.mount: Deactivated successfully.
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.173 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4388b363-773a-4716-8c7d-00d02392bfdb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:15:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:06.173 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[87d46f3f-3db7-48e8-9d52-c15cb2def230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.519 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.519 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.520 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.520 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.662 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.760 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.762 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.828 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.835 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.931 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:06 compute-0 nova_compute[189279]: 2025-12-10 20:15:06.932 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.013 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.028 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.182 189283 DEBUG nova.network.neutron [-] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.207 189283 INFO nova.compute.manager [-] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Took 1.22 seconds to deallocate network for instance.
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.257 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.258 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.388 189283 DEBUG nova.compute.provider_tree [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.415 189283 DEBUG nova.scheduler.client.report [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.453 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.485 189283 DEBUG nova.compute.manager [req-f5fac4b1-73f6-47a0-a39b-fe6a4f851d4a req-32e3a4aa-668c-4469-8f23-1e8fc704f0c0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received event network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.486 189283 DEBUG oslo_concurrency.lockutils [req-f5fac4b1-73f6-47a0-a39b-fe6a4f851d4a req-32e3a4aa-668c-4469-8f23-1e8fc704f0c0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "a4a66175-57ff-48da-8473-e93f72da4499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.486 189283 DEBUG oslo_concurrency.lockutils [req-f5fac4b1-73f6-47a0-a39b-fe6a4f851d4a req-32e3a4aa-668c-4469-8f23-1e8fc704f0c0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.487 189283 DEBUG oslo_concurrency.lockutils [req-f5fac4b1-73f6-47a0-a39b-fe6a4f851d4a req-32e3a4aa-668c-4469-8f23-1e8fc704f0c0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.487 189283 DEBUG nova.compute.manager [req-f5fac4b1-73f6-47a0-a39b-fe6a4f851d4a req-32e3a4aa-668c-4469-8f23-1e8fc704f0c0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] No waiting events found dispatching network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.488 189283 WARNING nova.compute.manager [req-f5fac4b1-73f6-47a0-a39b-fe6a4f851d4a req-32e3a4aa-668c-4469-8f23-1e8fc704f0c0 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received unexpected event network-vif-plugged-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 for instance with vm_state deleted and task_state None.
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.489 189283 INFO nova.scheduler.client.report [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Deleted allocations for instance a4a66175-57ff-48da-8473-e93f72da4499
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.555 189283 DEBUG oslo_concurrency.lockutils [None req-6cf2492d-eed9-475e-8cb3-e9526e37656e 598a18069aae495194ab1b43958530aa 8a51cea6d1cb40c383b87a400100e902 - - default default] Lock "a4a66175-57ff-48da-8473-e93f72da4499" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.587 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.588 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4983MB free_disk=72.2386245727539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.588 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.588 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.655 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 63639261-d8d9-46e1-8b3f-55af36a85e58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.656 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.657 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.657 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.915 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.930 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.955 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:15:07 compute-0 nova_compute[189279]: 2025-12-10 20:15:07.955 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:09 compute-0 nova_compute[189279]: 2025-12-10 20:15:09.607 189283 DEBUG nova.compute.manager [req-80ed757c-870e-4a8a-ac36-65ccb8748257 req-2fbdb90d-06e2-482a-a603-d8488aa4780f 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Received event network-vif-deleted-3ae03bc4-7221-4da1-8e97-1a1ea168ac84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:10 compute-0 nova_compute[189279]: 2025-12-10 20:15:10.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:10 compute-0 nova_compute[189279]: 2025-12-10 20:15:10.516 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:10 compute-0 nova_compute[189279]: 2025-12-10 20:15:10.516 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 20:15:10 compute-0 nova_compute[189279]: 2025-12-10 20:15:10.536 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 20:15:10 compute-0 ovn_controller[97701]: 2025-12-10T20:15:10Z|00179|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:15:10 compute-0 ovn_controller[97701]: 2025-12-10T20:15:10Z|00180|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:15:10 compute-0 nova_compute[189279]: 2025-12-10 20:15:10.797 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:10 compute-0 nova_compute[189279]: 2025-12-10 20:15:10.921 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:11 compute-0 podman[251540]: 2025-12-10 20:15:11.145307316 +0000 UTC m=+0.102809641 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:15:11 compute-0 podman[251541]: 2025-12-10 20:15:11.151158134 +0000 UTC m=+0.115180494 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm)
Dec 10 20:15:11 compute-0 ovn_controller[97701]: 2025-12-10T20:15:11Z|00181|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:15:11 compute-0 ovn_controller[97701]: 2025-12-10T20:15:11Z|00182|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:15:11 compute-0 nova_compute[189279]: 2025-12-10 20:15:11.666 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:12 compute-0 nova_compute[189279]: 2025-12-10 20:15:12.031 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:12 compute-0 nova_compute[189279]: 2025-12-10 20:15:12.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:12 compute-0 nova_compute[189279]: 2025-12-10 20:15:12.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:12 compute-0 nova_compute[189279]: 2025-12-10 20:15:12.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 20:15:15 compute-0 nova_compute[189279]: 2025-12-10 20:15:15.677 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397700.6736937, a6e19ece-bf39-4c33-bf2a-857b75ae2ca1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:15:15 compute-0 nova_compute[189279]: 2025-12-10 20:15:15.679 189283 INFO nova.compute.manager [-] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] VM Stopped (Lifecycle Event)
Dec 10 20:15:15 compute-0 nova_compute[189279]: 2025-12-10 20:15:15.700 189283 DEBUG nova.compute.manager [None req-95e48d37-1668-4d39-a9e4-3680600725f6 - - - - - -] [instance: a6e19ece-bf39-4c33-bf2a-857b75ae2ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:15:15 compute-0 nova_compute[189279]: 2025-12-10 20:15:15.927 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:17 compute-0 nova_compute[189279]: 2025-12-10 20:15:17.033 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:17 compute-0 ovn_controller[97701]: 2025-12-10T20:15:17Z|00183|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:15:17 compute-0 ovn_controller[97701]: 2025-12-10T20:15:17Z|00184|binding|INFO|Releasing lport 2f9d87e3-f102-4fe2-b4d5-b25a5d31091b from this chassis (sb_readonly=0)
Dec 10 20:15:17 compute-0 nova_compute[189279]: 2025-12-10 20:15:17.923 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:19 compute-0 podman[251585]: 2025-12-10 20:15:19.122461281 +0000 UTC m=+0.094333220 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec 10 20:15:19 compute-0 podman[251584]: 2025-12-10 20:15:19.15091608 +0000 UTC m=+0.112935013 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 10 20:15:19 compute-0 podman[251586]: 2025-12-10 20:15:19.169146073 +0000 UTC m=+0.129714417 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, name=ubi9, container_name=kepler, release-0.7.12=, version=9.4)
Dec 10 20:15:20 compute-0 nova_compute[189279]: 2025-12-10 20:15:20.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:20 compute-0 nova_compute[189279]: 2025-12-10 20:15:20.881 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397705.8791976, a4a66175-57ff-48da-8473-e93f72da4499 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:15:20 compute-0 nova_compute[189279]: 2025-12-10 20:15:20.882 189283 INFO nova.compute.manager [-] [instance: a4a66175-57ff-48da-8473-e93f72da4499] VM Stopped (Lifecycle Event)
Dec 10 20:15:20 compute-0 nova_compute[189279]: 2025-12-10 20:15:20.911 189283 DEBUG nova.compute.manager [None req-bd8d34d9-469c-4efd-a86c-fdd14dbb443d - - - - - -] [instance: a4a66175-57ff-48da-8473-e93f72da4499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:15:20 compute-0 nova_compute[189279]: 2025-12-10 20:15:20.936 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:22 compute-0 nova_compute[189279]: 2025-12-10 20:15:22.038 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:23.398 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:23.399 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:23.400 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:23 compute-0 nova_compute[189279]: 2025-12-10 20:15:23.951 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:24 compute-0 nova_compute[189279]: 2025-12-10 20:15:24.344 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.067 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.068 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.068 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.068 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.069 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.070 189283 INFO nova.compute.manager [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Terminating instance
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.071 189283 DEBUG nova.compute.manager [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:15:25 compute-0 kernel: tapa0f4e290-5b (unregistering): left promiscuous mode
Dec 10 20:15:25 compute-0 NetworkManager[56238]: <info>  [1765397725.1015] device (tapa0f4e290-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.114 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 ovn_controller[97701]: 2025-12-10T20:15:25Z|00185|binding|INFO|Releasing lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 from this chassis (sb_readonly=0)
Dec 10 20:15:25 compute-0 ovn_controller[97701]: 2025-12-10T20:15:25Z|00186|binding|INFO|Setting lport a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 down in Southbound
Dec 10 20:15:25 compute-0 ovn_controller[97701]: 2025-12-10T20:15:25Z|00187|binding|INFO|Removing iface tapa0f4e290-5b ovn-installed in OVS
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.120 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:b0:0b 10.100.0.8'], port_security=['fa:16:3e:f8:b0:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '63639261-d8d9-46e1-8b3f-55af36a85e58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ecefb2-de1d-4471-80a0-8f797ab99021', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e63db29894648c7a06ef3bcb4b98768', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6e991cb1-ab23-4fa3-b4b6-83b24087f30e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b611bc6-8b69-4351-a79d-b310ec70a551, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.122 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 in datapath 77ecefb2-de1d-4471-80a0-8f797ab99021 unbound from our chassis
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.126 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77ecefb2-de1d-4471-80a0-8f797ab99021, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.128 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.128 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0fb06d-2d98-4237-a76b-1728e84665fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.129 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 namespace which is not needed anymore
Dec 10 20:15:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 10 20:15:25 compute-0 podman[251642]: 2025-12-10 20:15:25.149304473 +0000 UTC m=+0.117640351 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:15:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000007.scope: Consumed 43.105s CPU time.
Dec 10 20:15:25 compute-0 podman[251641]: 2025-12-10 20:15:25.149868718 +0000 UTC m=+0.111543007 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 10 20:15:25 compute-0 systemd-machined[155642]: Machine qemu-14-instance-00000007 terminated.
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.350 189283 INFO nova.virt.libvirt.driver [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Instance destroyed successfully.
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.351 189283 DEBUG nova.objects.instance [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lazy-loading 'resources' on Instance uuid 63639261-d8d9-46e1-8b3f-55af36a85e58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:15:25 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[250856]: [NOTICE]   (250860) : haproxy version is 2.8.14-c23fe91
Dec 10 20:15:25 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[250856]: [NOTICE]   (250860) : path to executable is /usr/sbin/haproxy
Dec 10 20:15:25 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[250856]: [WARNING]  (250860) : Exiting Master process...
Dec 10 20:15:25 compute-0 systemd[1]: libpod-5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b.scope: Deactivated successfully.
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.365 189283 DEBUG nova.virt.libvirt.vif [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:12:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1460650199',display_name='tempest-ServerActionsTestJSON-server-1460650199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1460650199',id=7,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMZ4vtRw7tBhuM4o6MjvfbKNBIl4FQd4G6qFZVFfMRp+DuluVXm6EdlnooCaRI1wwhsIBxXE3togl4a//g9wsD+ZeM3HnXvIhtkdJ8sJuoGMY7C3lFqm65C06eytVKJQw==',key_name='tempest-keypair-71097797',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:13:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e63db29894648c7a06ef3bcb4b98768',ramdisk_id='',reservation_id='r-1tl971la',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-53104742',owner_user_name='tempest-ServerActionsTestJSON-53104742-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:14:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c9cd4059c654dd4947e252e9f3acf85',uuid=63639261-d8d9-46e1-8b3f-55af36a85e58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.366 189283 DEBUG nova.network.os_vif_util [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converting VIF {"id": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "address": "fa:16:3e:f8:b0:0b", "network": {"id": "77ecefb2-de1d-4471-80a0-8f797ab99021", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-822085889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e63db29894648c7a06ef3bcb4b98768", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f4e290-5b", "ovs_interfaceid": "a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:15:25 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[250856]: [ALERT]    (250860) : Current worker (250862) exited with code 143 (Terminated)
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.367 189283 DEBUG nova.network.os_vif_util [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:15:25 compute-0 neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021[250856]: [WARNING]  (250860) : All workers exited. Exiting... (0)
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.368 189283 DEBUG os_vif [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:15:25 compute-0 conmon[250856]: conmon 5c284ea8273c6f7fabc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b.scope/container/memory.events
Dec 10 20:15:25 compute-0 podman[251704]: 2025-12-10 20:15:25.369046921 +0000 UTC m=+0.121253298 container stop 5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.371 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.372 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f4e290-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.374 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.377 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.380 189283 INFO os_vif [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b0:0b,bridge_name='br-int',has_traffic_filtering=True,id=a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1,network=Network(77ecefb2-de1d-4471-80a0-8f797ab99021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f4e290-5b')
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.381 189283 INFO nova.virt.libvirt.driver [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Deleting instance files /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58_del
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.382 189283 INFO nova.virt.libvirt.driver [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Deletion of /var/lib/nova/instances/63639261-d8d9-46e1-8b3f-55af36a85e58_del complete
Dec 10 20:15:25 compute-0 podman[251704]: 2025-12-10 20:15:25.396014201 +0000 UTC m=+0.148220598 container died 5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 10 20:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b-userdata-shm.mount: Deactivated successfully.
Dec 10 20:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2faa317b93ad0ba74718b303c9a230aff25eac25eb0f7626d890043d4f64e876-merged.mount: Deactivated successfully.
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.442 189283 INFO nova.compute.manager [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Took 0.37 seconds to destroy the instance on the hypervisor.
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.444 189283 DEBUG oslo.service.loopingcall [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.444 189283 DEBUG nova.compute.manager [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.445 189283 DEBUG nova.network.neutron [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:15:25 compute-0 podman[251704]: 2025-12-10 20:15:25.451715577 +0000 UTC m=+0.203921954 container cleanup 5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:15:25 compute-0 systemd[1]: libpod-conmon-5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b.scope: Deactivated successfully.
Dec 10 20:15:25 compute-0 podman[251751]: 2025-12-10 20:15:25.535172312 +0000 UTC m=+0.059065107 container remove 5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.544 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0e789c92-dbe5-4098-bdb5-b0b229d8f7ae]: (4, ('Wed Dec 10 08:15:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 (5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b)\n5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b\nWed Dec 10 08:15:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 (5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b)\n5c284ea8273c6f7fabc5c6b0508d6eba0383ed2f3ad14c9cd83099b9091f8b4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.546 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b89baa-5905-4ee8-a953-f1bdf41d3515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.547 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ecefb2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.549 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 kernel: tap77ecefb2-d0: left promiscuous mode
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.553 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.557 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[2c353095-0222-4aac-a0d9-548dbb4e7a7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.567 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.584 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cb9a4d-ba07-47ab-8b0e-9f6273fc693f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.586 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a05379-8bc6-4d07-aa2a-b5e536b4dc2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.603 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c12be26b-53c7-4fde-965a-610f47cd9fe4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498848, 'reachable_time': 34107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251765, 'error': None, 'target': 'ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.607 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:15:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d77ecefb2\x2dde1d\x2d4471\x2d80a0\x2d8f797ab99021.mount: Deactivated successfully.
Dec 10 20:15:25 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:25.607 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[47f95d0d-1f7e-413c-8064-6c652871c052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
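The remove_netns call logged above deletes the per-network ovnmeta-* metadata namespace once its port is gone; the systemd run-netns mount line just above is the corresponding /run/netns bind mount being torn down. A minimal sketch, assuming the pyroute2 netns helper that neutron's privileged ip_lib wraps:

    from pyroute2 import netns

    # namespace name taken from the log entry above
    netns.remove('ovnmeta-77ecefb2-de1d-4471-80a0-8f797ab99021')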
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.716 189283 DEBUG nova.compute.manager [req-2d11e9f7-366d-4266-b2e4-4542127ea242 req-0588a872-3e61-4d12-a7d1-8f32d2ca9fdb 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-unplugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.717 189283 DEBUG oslo_concurrency.lockutils [req-2d11e9f7-366d-4266-b2e4-4542127ea242 req-0588a872-3e61-4d12-a7d1-8f32d2ca9fdb 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.718 189283 DEBUG oslo_concurrency.lockutils [req-2d11e9f7-366d-4266-b2e4-4542127ea242 req-0588a872-3e61-4d12-a7d1-8f32d2ca9fdb 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.719 189283 DEBUG oslo_concurrency.lockutils [req-2d11e9f7-366d-4266-b2e4-4542127ea242 req-0588a872-3e61-4d12-a7d1-8f32d2ca9fdb 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.719 189283 DEBUG nova.compute.manager [req-2d11e9f7-366d-4266-b2e4-4542127ea242 req-0588a872-3e61-4d12-a7d1-8f32d2ca9fdb 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] No waiting events found dispatching network-vif-unplugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:15:25 compute-0 nova_compute[189279]: 2025-12-10 20:15:25.720 189283 DEBUG nova.compute.manager [req-2d11e9f7-366d-4266-b2e4-4542127ea242 req-0588a872-3e61-4d12-a7d1-8f32d2ca9fdb 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-unplugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
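The Acquiring/acquired/released triplet above is oslo.concurrency's standard logging around nova's per-instance external-event queue. A hedged sketch of that pattern (the lock name mirrors the "<uuid>-events" lock in the log; the body is a placeholder for the event pop):

    from oslo_concurrency import lockutils

    instance_uuid = '63639261-d8d9-46e1-8b3f-55af36a85e58'  # from the log

    with lockutils.lock(f'{instance_uuid}-events'):
        # nova pops (or registers) the pending network-vif-* event for this
        # instance while the lock is held
        pass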
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.040 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:27 compute-0 podman[251766]: 2025-12-10 20:15:27.203156696 +0000 UTC m=+0.186049229 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.261 189283 DEBUG nova.network.neutron [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.279 189283 INFO nova.compute.manager [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Took 1.83 seconds to deallocate network for instance.
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.335 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.337 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.368 189283 DEBUG nova.compute.manager [req-1cfe520a-0052-48b9-afb9-648d3b79e356 req-85d8781b-495b-4518-bb93-cb4582eb7c87 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-deleted-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.408 189283 DEBUG nova.compute.provider_tree [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.422 189283 DEBUG nova.scheduler.client.report [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
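For reference, the inventory dict above maps to schedulable capacity as (total - reserved) * allocation_ratio, which is how placement treats these fields; worked out from the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2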
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.445 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.475 189283 INFO nova.scheduler.client.report [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Deleted allocations for instance 63639261-d8d9-46e1-8b3f-55af36a85e58
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.565 189283 DEBUG oslo_concurrency.lockutils [None req-56283890-e3ef-4426-afc6-1342c3329263 0c9cd4059c654dd4947e252e9f3acf85 2e63db29894648c7a06ef3bcb4b98768 - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.814 189283 DEBUG nova.compute.manager [req-266a6403-f5cb-4cf0-821a-e5041955eec1 req-338f4ec5-ff25-4c34-a4ca-52d103302b27 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.815 189283 DEBUG oslo_concurrency.lockutils [req-266a6403-f5cb-4cf0-821a-e5041955eec1 req-338f4ec5-ff25-4c34-a4ca-52d103302b27 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.815 189283 DEBUG oslo_concurrency.lockutils [req-266a6403-f5cb-4cf0-821a-e5041955eec1 req-338f4ec5-ff25-4c34-a4ca-52d103302b27 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.816 189283 DEBUG oslo_concurrency.lockutils [req-266a6403-f5cb-4cf0-821a-e5041955eec1 req-338f4ec5-ff25-4c34-a4ca-52d103302b27 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "63639261-d8d9-46e1-8b3f-55af36a85e58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.816 189283 DEBUG nova.compute.manager [req-266a6403-f5cb-4cf0-821a-e5041955eec1 req-338f4ec5-ff25-4c34-a4ca-52d103302b27 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] No waiting events found dispatching network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:15:27 compute-0 nova_compute[189279]: 2025-12-10 20:15:27.817 189283 WARNING nova.compute.manager [req-266a6403-f5cb-4cf0-821a-e5041955eec1 req-338f4ec5-ff25-4c34-a4ca-52d103302b27 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Received unexpected event network-vif-plugged-a0f4e290-5bfb-4f64-ba5d-6dd196ad71b1 for instance with vm_state deleted and task_state None.
Dec 10 20:15:28 compute-0 nova_compute[189279]: 2025-12-10 20:15:28.020 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:29 compute-0 podman[203484]: time="2025-12-10T20:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:15:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:15:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
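The two GET lines above are a collector hitting podman's libpod REST API over its unix socket. A rough equivalent of the first request (socket path assumed for a root podman API service; the requests-unixsocket package is used purely for illustration):

    import requests_unixsocket

    session = requests_unixsocket.Session()
    # same endpoint as the logged request, percent-encoded socket path
    resp = session.get(
        'http+unix://%2Frun%2Fpodman%2Fpodman.sock'
        '/v4.9.3/libpod/containers/json?all=true')
    print(len(resp.json()), 'containers')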
Dec 10 20:15:30 compute-0 nova_compute[189279]: 2025-12-10 20:15:30.378 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:31 compute-0 podman[251789]: 2025-12-10 20:15:31.135101424 +0000 UTC m=+0.104800634 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, managed_by=edpm_ansible)
Dec 10 20:15:31 compute-0 openstack_network_exporter[205632]: ERROR   20:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:15:31 compute-0 openstack_network_exporter[205632]: ERROR   20:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:15:31 compute-0 openstack_network_exporter[205632]: ERROR   20:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:15:31 compute-0 openstack_network_exporter[205632]: ERROR   20:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:15:31 compute-0 openstack_network_exporter[205632]: ERROR   20:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
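The exporter errors above mean it found no ovsdb-server or ovn-northd control sockets under its configured run directories (ovn-northd would not normally run on a compute node anyway, so those two lines are expected there). A trivial check of the socket locations, using the host paths this exporter mounts per its volume list further down in this log (paths are an assumption from that list):

    import glob

    # *.ctl control sockets are created by ovs-vswitchd/ovsdb-server and the
    # OVN daemons under their run directories
    print(glob.glob('/var/run/openvswitch/*.ctl'))
    print(glob.glob('/var/lib/openvswitch/ovn/*.ctl'))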
Dec 10 20:15:32 compute-0 nova_compute[189279]: 2025-12-10 20:15:32.044 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:33 compute-0 nova_compute[189279]: 2025-12-10 20:15:33.613 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:33 compute-0 ovn_controller[97701]: 2025-12-10T20:15:33Z|00188|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:15:33 compute-0 nova_compute[189279]: 2025-12-10 20:15:33.808 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:34 compute-0 ovn_controller[97701]: 2025-12-10T20:15:34Z|00189|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:15:34 compute-0 nova_compute[189279]: 2025-12-10 20:15:34.075 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:35 compute-0 nova_compute[189279]: 2025-12-10 20:15:35.384 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:37 compute-0 nova_compute[189279]: 2025-12-10 20:15:37.047 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.327 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.346 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397725.3447304, 63639261-d8d9-46e1-8b3f-55af36a85e58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.348 189283 INFO nova.compute.manager [-] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] VM Stopped (Lifecycle Event)
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.389 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.537 189283 DEBUG nova.compute.manager [None req-ae039333-3c9b-417e-9da2-f0797b29619e - - - - - -] [instance: 63639261-d8d9-46e1-8b3f-55af36a85e58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.545 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Triggering sync for uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.547 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.548 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:40 compute-0 nova_compute[189279]: 2025-12-10 20:15:40.600 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:42 compute-0 nova_compute[189279]: 2025-12-10 20:15:42.050 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:42 compute-0 podman[251812]: 2025-12-10 20:15:42.118198968 +0000 UTC m=+0.090857677 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 10 20:15:42 compute-0 podman[251811]: 2025-12-10 20:15:42.139288988 +0000 UTC m=+0.121003091 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.182 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.183 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.183 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
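Each "Registering pollster [<stevedore.extension.Extension ...>]" line above corresponds to one entry point loaded through stevedore and handed to the thread-pool executor. A hedged sketch of that loading step (the 'ceilometer.poll.compute' namespace is an assumption based on upstream ceilometer packaging):

    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute',
                                     invoke_on_load=False)
    # expect names like 'disk.ephemeral.size', 'network.incoming.packets', ...
    print(sorted(mgr.names()))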
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.192 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ca7daa1b-94a2-4e08-902b-73be0ab83974 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:15:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:42.195 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ca7daa1b-94a2-4e08-902b-73be0ab83974 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.861 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Wed, 10 Dec 2025 20:15:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5175255d-8562-4b6c-8a7e-d791c913883e x-openstack-request-id: req-5175255d-8562-4b6c-8a7e-d791c913883e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.862 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ca7daa1b-94a2-4e08-902b-73be0ab83974", "name": "te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r", "status": "ACTIVE", "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "user_id": "639468767e8f48a1bd0e3dac90a0ec47", "metadata": {"metering.server_group": "bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda"}, "hostId": "1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131", "image": {"id": "ab2dea70-7375-4e2d-beda-90f19a5ec15e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ab2dea70-7375-4e2d-beda-90f19a5ec15e"}]}, "flavor": {"id": "e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4"}]}, "created": "2025-12-10T20:14:20Z", "updated": "2025-12-10T20:14:30Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.68", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:9b:fb:da"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ca7daa1b-94a2-4e08-902b-73be0ab83974"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ca7daa1b-94a2-4e08-902b-73be0ab83974"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-10T20:14:30.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.862 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ca7daa1b-94a2-4e08-902b-73be0ab83974 used request id req-5175255d-8562-4b6c-8a7e-d791c913883e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.863 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'name': 'te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
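The REQ/RESP pair above is ceilometer's discovery step fetching instance metadata through python-novaclient. A minimal, hedged reproduction of that GET (auth values are placeholders; ceilometer builds its real session from its own service credentials):

    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://keystone-internal.openstack.svc:5000',  # placeholder
        username='ceilometer', password='...',                    # placeholders
        project_name='service',
        user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    server = nova.servers.get('ca7daa1b-94a2-4e08-902b-73be0ab83974')
    print(server.name, server.status)   # matches the RESP BODY above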
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.863 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.864 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.865 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:15:44.864268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:15:44.865901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.884 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.885 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.886 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.886 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:15:44.886490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:15:44.887483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.892 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ca7daa1b-94a2-4e08-902b-73be0ab83974 / tap809bdeda-a7 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.892 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.893 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.893 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.893 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:15:44.893672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.894 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.895 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.895 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:15:44.894877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:15:44.895962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.896 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:15:44.897015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:15:44.898142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.922 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/memory.usage volume: 43.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.923 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.923 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.923 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.924 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.924 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r>]
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.925 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.925 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.925 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T20:15:44.923756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:15:44.925545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.925 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.925 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.926 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.926 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.926 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.927 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.927 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.927 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.928 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.928 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.929 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:15:44.927103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:15:44.928961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.930 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.930 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.930 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:15:44.930509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.931 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.932 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.932 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:15:44.932152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.933 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.933 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:15:44.933751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.974 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 29436928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.975 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.975 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.976 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.976 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.976 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/cpu volume: 72860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:15:44.976273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.977 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.977 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.977 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.977 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 542055066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.978 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 53898242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.978 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:15:44.977789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.979 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:15:44.979457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.979 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 1055 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.980 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.980 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.981 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.981 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:15:44.981078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.981 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.983 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.983 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 72835072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.983 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:15:44.982992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.984 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.985 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.985 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:15:44.985181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.987 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.987 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 3631515766 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.987 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:15:44.986987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.988 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.988 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.988 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:15:44.988780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.989 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.990 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.990 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:15:44.990309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.991 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.991 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.991 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T20:15:44.991505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.991 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r>]
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:44 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:15:44.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:15:45 compute-0 nova_compute[189279]: 2025-12-10 20:15:45.396 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:47 compute-0 nova_compute[189279]: 2025-12-10 20:15:47.056 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.021 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.022 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.041 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.134 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.134 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.144 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.144 189283 INFO nova.compute.claims [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:15:50 compute-0 podman[251852]: 2025-12-10 20:15:50.147625595 +0000 UTC m=+0.127304152 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:15:50 compute-0 podman[251853]: 2025-12-10 20:15:50.159133976 +0000 UTC m=+0.133768067 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 20:15:50 compute-0 podman[251854]: 2025-12-10 20:15:50.175627292 +0000 UTC m=+0.133699815 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0)
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.316 189283 DEBUG nova.compute.provider_tree [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.331 189283 DEBUG nova.scheduler.client.report [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.356 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.357 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.400 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.417 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.417 189283 DEBUG nova.network.neutron [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.437 189283 INFO nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.458 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.565 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.568 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.569 189283 INFO nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Creating image(s)
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.570 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "/var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.570 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "/var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.571 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "/var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.601 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.677 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.679 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.680 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.697 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.757 189283 DEBUG nova.policy [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00203ee721e44cf0bbd263737b393460', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d673f680e841bb84a2447a5bd69e58', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.779 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.780 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.841 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905,backing_fmt=raw /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.844 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6f27c3b74299e89bd51ef4292a29b048cf6b0905" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.845 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.929 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.931 189283 DEBUG nova.virt.disk.api [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Checking if we can resize image /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:15:50 compute-0 nova_compute[189279]: 2025-12-10 20:15:50.931 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.003 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.005 189283 DEBUG nova.virt.disk.api [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Cannot resize image /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.005 189283 DEBUG nova.objects.instance [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lazy-loading 'migration_context' on Instance uuid 6d92fd7a-b7be-41bb-a2f4-d005ef181baf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.025 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.025 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Ensure instance console log exists: /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.026 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.027 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:51 compute-0 nova_compute[189279]: 2025-12-10 20:15:51.028 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:52 compute-0 nova_compute[189279]: 2025-12-10 20:15:52.015 189283 DEBUG nova.network.neutron [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Successfully created port: a48d3ed3-7e30-4488-b166-81c4c64bc0af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:15:52 compute-0 nova_compute[189279]: 2025-12-10 20:15:52.060 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.575 189283 DEBUG nova.network.neutron [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Successfully updated port: a48d3ed3-7e30-4488-b166-81c4c64bc0af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.593 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.594 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquired lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.595 189283 DEBUG nova.network.neutron [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.751 189283 DEBUG nova.compute.manager [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-changed-a48d3ed3-7e30-4488-b166-81c4c64bc0af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.752 189283 DEBUG nova.compute.manager [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Refreshing instance network info cache due to event network-changed-a48d3ed3-7e30-4488-b166-81c4c64bc0af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.753 189283 DEBUG oslo_concurrency.lockutils [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:15:53 compute-0 nova_compute[189279]: 2025-12-10 20:15:53.849 189283 DEBUG nova.network.neutron [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.063 189283 DEBUG nova.network.neutron [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updating instance_info_cache with network_info: [{"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.090 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Releasing lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.091 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Instance network_info: |[{"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.091 189283 DEBUG oslo_concurrency.lockutils [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.092 189283 DEBUG nova.network.neutron [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Refreshing network info cache for port a48d3ed3-7e30-4488-b166-81c4c64bc0af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.096 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Start _get_guest_xml network_info=[{"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': '33b11153-486b-4d32-bc63-6b6a6ed0b704'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.111 189283 WARNING nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.124 189283 DEBUG nova.virt.libvirt.host [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.126 189283 DEBUG nova.virt.libvirt.host [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.141 189283 DEBUG nova.virt.libvirt.host [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.142 189283 DEBUG nova.virt.libvirt.host [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.144 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.144 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:11:59Z,direct_url=<?>,disk_format='qcow2',id=33b11153-486b-4d32-bc63-6b6a6ed0b704,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='fe518ea62a94467e823b2b1046c57a2e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:12:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.146 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.147 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.147 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.148 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.149 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.149 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.150 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.151 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.152 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.152 189283 DEBUG nova.virt.hardware [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.157 189283 DEBUG nova.virt.libvirt.vif [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1965064837',display_name='tempest-TestServerBasicOps-server-1965064837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1965064837',id=15,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDtClPidoUZr/gtSoy5X2jhL9FQmmNoYQOtsoLtYl8uwQbfKuCTDOK3f56CVPEHz1hPGBkvjzXRTqQqDtjC90N1kV+iSfZ30g2TMqOHCMIVxd2yw0WwlCN2U3/wlr+LLIQ==',key_name='tempest-TestServerBasicOps-1376247009',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d673f680e841bb84a2447a5bd69e58',ramdisk_id='',reservation_id='r-u03yeqdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-487853199',owner_user_name='tempest-TestServerBasicOps-487853199-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:15:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00203ee721e44cf0bbd263737b393460',uuid=6d92fd7a-b7be-41bb-a2f4-d005ef181baf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": 
"a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.158 189283 DEBUG nova.network.os_vif_util [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Converting VIF {"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.159 189283 DEBUG nova.network.os_vif_util [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=a48d3ed3-7e30-4488-b166-81c4c64bc0af,network=Network(cf2a01cc-d40e-4a4b-917f-0cb626e4f369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48d3ed3-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.160 189283 DEBUG nova.objects.instance [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d92fd7a-b7be-41bb-a2f4-d005ef181baf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.174 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <uuid>6d92fd7a-b7be-41bb-a2f4-d005ef181baf</uuid>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <name>instance-0000000f</name>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <nova:name>tempest-TestServerBasicOps-server-1965064837</nova:name>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:15:55</nova:creationTime>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:user uuid="00203ee721e44cf0bbd263737b393460">tempest-TestServerBasicOps-487853199-project-member</nova:user>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:project uuid="46d673f680e841bb84a2447a5bd69e58">tempest-TestServerBasicOps-487853199</nova:project>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="33b11153-486b-4d32-bc63-6b6a6ed0b704"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         <nova:port uuid="a48d3ed3-7e30-4488-b166-81c4c64bc0af">
Dec 10 20:15:55 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <system>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <entry name="serial">6d92fd7a-b7be-41bb-a2f4-d005ef181baf</entry>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <entry name="uuid">6d92fd7a-b7be-41bb-a2f4-d005ef181baf</entry>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </system>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <os>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   </os>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <features>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   </features>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk.config"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:76:d3:5f"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <target dev="tapa48d3ed3-7e"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/console.log" append="off"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <video>
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </video>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:15:55 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:15:55 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:15:55 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:15:55 compute-0 nova_compute[189279]: </domain>
Dec 10 20:15:55 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.176 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Preparing to wait for external event network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.177 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.178 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.178 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.180 189283 DEBUG nova.virt.libvirt.vif [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1965064837',display_name='tempest-TestServerBasicOps-server-1965064837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1965064837',id=15,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDtClPidoUZr/gtSoy5X2jhL9FQmmNoYQOtsoLtYl8uwQbfKuCTDOK3f56CVPEHz1hPGBkvjzXRTqQqDtjC90N1kV+iSfZ30g2TMqOHCMIVxd2yw0WwlCN2U3/wlr+LLIQ==',key_name='tempest-TestServerBasicOps-1376247009',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d673f680e841bb84a2447a5bd69e58',ramdisk_id='',reservation_id='r-u03yeqdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-487853199',owner_user_name='tempest-TestServerBasicOps-487853199-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:15:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00203ee721e44cf0bbd263737b393460',uuid=6d92fd7a-b7be-41bb-a2f4-d005ef181baf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.180 189283 DEBUG nova.network.os_vif_util [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Converting VIF {"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.181 189283 DEBUG nova.network.os_vif_util [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=a48d3ed3-7e30-4488-b166-81c4c64bc0af,network=Network(cf2a01cc-d40e-4a4b-917f-0cb626e4f369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48d3ed3-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.182 189283 DEBUG os_vif [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=a48d3ed3-7e30-4488-b166-81c4c64bc0af,network=Network(cf2a01cc-d40e-4a4b-917f-0cb626e4f369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48d3ed3-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.183 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.183 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.184 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.190 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.191 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa48d3ed3-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.191 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa48d3ed3-7e, col_values=(('external_ids', {'iface-id': 'a48d3ed3-7e30-4488-b166-81c4c64bc0af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:d3:5f', 'vm-uuid': '6d92fd7a-b7be-41bb-a2f4-d005ef181baf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.194 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:55 compute-0 NetworkManager[56238]: <info>  [1765397755.1968] manager: (tapa48d3ed3-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.198 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.210 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.212 189283 INFO os_vif [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=a48d3ed3-7e30-4488-b166-81c4c64bc0af,network=Network(cf2a01cc-d40e-4a4b-917f-0cb626e4f369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48d3ed3-7e')
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.351 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.352 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.352 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] No VIF found with MAC fa:16:3e:76:d3:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.353 189283 INFO nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Using config drive
Dec 10 20:15:55 compute-0 podman[251921]: 2025-12-10 20:15:55.358896572 +0000 UTC m=+0.096768678 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:15:55 compute-0 podman[251922]: 2025-12-10 20:15:55.376023365 +0000 UTC m=+0.105751110 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.775 189283 INFO nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Creating config drive at /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk.config
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.790 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw9xw1x8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:15:55 compute-0 nova_compute[189279]: 2025-12-10 20:15:55.944 189283 DEBUG oslo_concurrency.processutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw9xw1x8" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:15:56 compute-0 kernel: tapa48d3ed3-7e: entered promiscuous mode
Dec 10 20:15:56 compute-0 NetworkManager[56238]: <info>  [1765397756.0781] manager: (tapa48d3ed3-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Dec 10 20:15:56 compute-0 ovn_controller[97701]: 2025-12-10T20:15:56Z|00190|binding|INFO|Claiming lport a48d3ed3-7e30-4488-b166-81c4c64bc0af for this chassis.
Dec 10 20:15:56 compute-0 ovn_controller[97701]: 2025-12-10T20:15:56Z|00191|binding|INFO|a48d3ed3-7e30-4488-b166-81c4c64bc0af: Claiming fa:16:3e:76:d3:5f 10.100.0.7
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.095 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.111 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:d3:5f 10.100.0.7'], port_security=['fa:16:3e:76:d3:5f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6d92fd7a-b7be-41bb-a2f4-d005ef181baf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d673f680e841bb84a2447a5bd69e58', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c59905a-1646-4b04-ac08-dd70a7ae7437 ac7e200f-9865-421f-96b9-20c05c927e99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f054f73-22cb-4b6c-80c2-fbc673731e1f, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a48d3ed3-7e30-4488-b166-81c4c64bc0af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.114 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a48d3ed3-7e30-4488-b166-81c4c64bc0af in datapath cf2a01cc-d40e-4a4b-917f-0cb626e4f369 bound to our chassis
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.118 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf2a01cc-d40e-4a4b-917f-0cb626e4f369
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.147 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b4453d48-950f-44e5-929e-5656ec3b69f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.148 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf2a01cc-d1 in ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.151 239384 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf2a01cc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.151 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[d14f3176-a008-4259-ae15-d2dcc61eb92b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.155 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[fb72b12a-f3ed-4e0a-8f78-97d95a350aeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 systemd-udevd[251983]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.180 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4b5646-9d4c-4b6c-ac36-31124d99ab8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 systemd-machined[155642]: New machine qemu-16-instance-0000000f.
Dec 10 20:15:56 compute-0 NetworkManager[56238]: <info>  [1765397756.1954] device (tapa48d3ed3-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.199 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 NetworkManager[56238]: <info>  [1765397756.2013] device (tapa48d3ed3-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:15:56 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec 10 20:15:56 compute-0 ovn_controller[97701]: 2025-12-10T20:15:56Z|00192|binding|INFO|Setting lport a48d3ed3-7e30-4488-b166-81c4c64bc0af ovn-installed in OVS
Dec 10 20:15:56 compute-0 ovn_controller[97701]: 2025-12-10T20:15:56Z|00193|binding|INFO|Setting lport a48d3ed3-7e30-4488-b166-81c4c64bc0af up in Southbound
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.211 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.220 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c621b5c8-4fb1-4cba-86ad-d9c48317fe12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.260 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[c84cd022-438c-4671-967f-4f61c66f72a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.271 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a23a0690-3077-4578-9a19-ba47d985d67d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 NetworkManager[56238]: <info>  [1765397756.2736] manager: (tapcf2a01cc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Dec 10 20:15:56 compute-0 systemd-udevd[251986]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.317 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e9f6e3-9524-43f5-9b95-1b3c7fd2c463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.322 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f3e052-0ea1-4cd1-ba12-7b154b9a0e4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 NetworkManager[56238]: <info>  [1765397756.3543] device (tapcf2a01cc-d0): carrier: link connected
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.362 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[37c0cfaf-40c8-4174-be79-330b84d5e1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.386 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[54afa4c5-b2e7-431e-b668-2dca4b5a622a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf2a01cc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:b7:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508501, 'reachable_time': 28486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252014, 'error': None, 'target': 'ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.409 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[7470b43d-d69a-47ce-9b70-e2c3e401b9b5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:b79a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508501, 'tstamp': 508501}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252015, 'error': None, 'target': 'ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.435 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9d9267-11bb-470a-a7d3-2066131b5a79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf2a01cc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:b7:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508501, 'reachable_time': 28486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252016, 'error': None, 'target': 'ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.491 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca8a7ba-8589-4a69-ad06-7b3190edd85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.583 189283 DEBUG nova.compute.manager [req-f823c558-1ee9-4ff2-82e7-370a84be5771 req-e691c163-d2ba-4ded-a552-20e936abf5b6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.584 189283 DEBUG oslo_concurrency.lockutils [req-f823c558-1ee9-4ff2-82e7-370a84be5771 req-e691c163-d2ba-4ded-a552-20e936abf5b6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.585 189283 DEBUG oslo_concurrency.lockutils [req-f823c558-1ee9-4ff2-82e7-370a84be5771 req-e691c163-d2ba-4ded-a552-20e936abf5b6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.585 189283 DEBUG oslo_concurrency.lockutils [req-f823c558-1ee9-4ff2-82e7-370a84be5771 req-e691c163-d2ba-4ded-a552-20e936abf5b6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.586 189283 DEBUG nova.compute.manager [req-f823c558-1ee9-4ff2-82e7-370a84be5771 req-e691c163-d2ba-4ded-a552-20e936abf5b6 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Processing event network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.585 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[738e70a9-5626-4558-8b52-4a832616077e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.588 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf2a01cc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.588 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.589 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf2a01cc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.591 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 kernel: tapcf2a01cc-d0: entered promiscuous mode
Dec 10 20:15:56 compute-0 NetworkManager[56238]: <info>  [1765397756.5924] manager: (tapcf2a01cc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.596 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.597 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf2a01cc-d0, col_values=(('external_ids', {'iface-id': '8291c7e4-dc68-408b-9b00-13c10902b8a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.599 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 ovn_controller[97701]: 2025-12-10T20:15:56Z|00194|binding|INFO|Releasing lport 8291c7e4-dc68-408b-9b00-13c10902b8a7 from this chassis (sb_readonly=0)
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.600 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.601 106564 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf2a01cc-d40e-4a4b-917f-0cb626e4f369.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf2a01cc-d40e-4a4b-917f-0cb626e4f369.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.602 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e93250da-9d8e-4069-b731-c2c9ce840df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.603 106564 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: global
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     log         /dev/log local0 debug
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     log-tag     haproxy-metadata-proxy-cf2a01cc-d40e-4a4b-917f-0cb626e4f369
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     user        root
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     group       root
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     maxconn     1024
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     pidfile     /var/lib/neutron/external/pids/cf2a01cc-d40e-4a4b-917f-0cb626e4f369.pid.haproxy
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     daemon
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: defaults
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     log global
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     mode http
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     option httplog
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     option dontlognull
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     option http-server-close
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     option forwardfor
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     retries                 3
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     timeout http-request    30s
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     timeout connect         30s
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     timeout client          32s
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     timeout server          32s
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     timeout http-keep-alive 30s
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: listen listener
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     bind 169.254.169.254:80
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     server metadata /var/lib/neutron/metadata_proxy
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:     http-request add-header X-OVN-Network-ID cf2a01cc-d40e-4a4b-917f-0cb626e4f369
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 10 20:15:56 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:56.605 106564 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'env', 'PROCESS_TAG=haproxy-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf2a01cc-d40e-4a4b-917f-0cb626e4f369.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.624 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.987 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397756.9868538, 6d92fd7a-b7be-41bb-a2f4-d005ef181baf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.989 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] VM Started (Lifecycle Event)
Dec 10 20:15:56 compute-0 nova_compute[189279]: 2025-12-10 20:15:56.993 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.005 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.012 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.014 189283 INFO nova.virt.libvirt.driver [-] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Instance spawned successfully.
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.014 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.019 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.039 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.039 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.040 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.040 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.040 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.041 189283 DEBUG nova.virt.libvirt.driver [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.061 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.062 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397756.988755, 6d92fd7a-b7be-41bb-a2f4-d005ef181baf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.062 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] VM Paused (Lifecycle Event)
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.063 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.075 189283 DEBUG nova.network.neutron [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updated VIF entry in instance network info cache for port a48d3ed3-7e30-4488-b166-81c4c64bc0af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.075 189283 DEBUG nova.network.neutron [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updating instance_info_cache with network_info: [{"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.112 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.117 189283 DEBUG oslo_concurrency.lockutils [req-bfe22b44-dc90-4c14-b3b9-357837b12bf6 req-a5d5eb37-e269-42fd-b33c-3b6ae52af54c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.119 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397756.997903, 6d92fd7a-b7be-41bb-a2f4-d005ef181baf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.119 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] VM Resumed (Lifecycle Event)
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.129 189283 INFO nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Took 6.56 seconds to spawn the instance on the hypervisor.
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.129 189283 DEBUG nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.163 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.169 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:15:57 compute-0 podman[252054]: 2025-12-10 20:15:57.18093821 +0000 UTC m=+0.090270852 container create 97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.207 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:15:57 compute-0 podman[252054]: 2025-12-10 20:15:57.13877754 +0000 UTC m=+0.048110222 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.228 189283 INFO nova.compute.manager [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Took 7.12 seconds to build instance.
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.256 189283 DEBUG oslo_concurrency.lockutils [None req-d66faf20-4c83-414a-9d0f-6210fac0a408 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:57.268 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:15:57 compute-0 nova_compute[189279]: 2025-12-10 20:15:57.274 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:15:57 compute-0 systemd[1]: Started libpod-conmon-97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b.scope.
Dec 10 20:15:57 compute-0 systemd[1]: Started libcrun container.
Dec 10 20:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed457f465072817fcc38b2e7a0bdb30a9fd450d3b7697c73f558148ebc5e0862/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 10 20:15:57 compute-0 podman[252054]: 2025-12-10 20:15:57.355126388 +0000 UTC m=+0.264459040 container init 97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:15:57 compute-0 podman[252054]: 2025-12-10 20:15:57.364679396 +0000 UTC m=+0.274012038 container start 97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 10 20:15:57 compute-0 neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252068]: [NOTICE]   (252082) : New worker (252088) forked
Dec 10 20:15:57 compute-0 neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252068]: [NOTICE]   (252082) : Loading success.
Dec 10 20:15:57 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:57.468 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:15:57 compute-0 podman[252070]: 2025-12-10 20:15:57.48764725 +0000 UTC m=+0.181262920 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 10 20:15:59 compute-0 nova_compute[189279]: 2025-12-10 20:15:59.215 189283 DEBUG nova.compute.manager [req-b5d422a5-ae2f-42e7-b62a-39aca222ec01 req-ac78af7b-b97a-4eeb-af22-d640fb049e36 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:15:59 compute-0 nova_compute[189279]: 2025-12-10 20:15:59.216 189283 DEBUG oslo_concurrency.lockutils [req-b5d422a5-ae2f-42e7-b62a-39aca222ec01 req-ac78af7b-b97a-4eeb-af22-d640fb049e36 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:15:59 compute-0 nova_compute[189279]: 2025-12-10 20:15:59.216 189283 DEBUG oslo_concurrency.lockutils [req-b5d422a5-ae2f-42e7-b62a-39aca222ec01 req-ac78af7b-b97a-4eeb-af22-d640fb049e36 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:15:59 compute-0 nova_compute[189279]: 2025-12-10 20:15:59.217 189283 DEBUG oslo_concurrency.lockutils [req-b5d422a5-ae2f-42e7-b62a-39aca222ec01 req-ac78af7b-b97a-4eeb-af22-d640fb049e36 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:15:59 compute-0 nova_compute[189279]: 2025-12-10 20:15:59.217 189283 DEBUG nova.compute.manager [req-b5d422a5-ae2f-42e7-b62a-39aca222ec01 req-ac78af7b-b97a-4eeb-af22-d640fb049e36 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] No waiting events found dispatching network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:15:59 compute-0 nova_compute[189279]: 2025-12-10 20:15:59.217 189283 WARNING nova.compute.manager [req-b5d422a5-ae2f-42e7-b62a-39aca222ec01 req-ac78af7b-b97a-4eeb-af22-d640fb049e36 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received unexpected event network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af for instance with vm_state active and task_state None.
Dec 10 20:15:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:15:59.471 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:15:59 compute-0 nova_compute[189279]: 2025-12-10 20:15:59.710 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:15:59 compute-0 podman[203484]: time="2025-12-10T20:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:15:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Dec 10 20:15:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5272 "" "Go-http-client/1.1"
Dec 10 20:16:00 compute-0 nova_compute[189279]: 2025-12-10 20:16:00.021 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:00 compute-0 NetworkManager[56238]: <info>  [1765397760.0252] manager: (patch-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec 10 20:16:00 compute-0 NetworkManager[56238]: <info>  [1765397760.0281] manager: (patch-br-int-to-provnet-585b0ffa-caa8-4b2b-92b9-d0501e2b5999): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec 10 20:16:00 compute-0 nova_compute[189279]: 2025-12-10 20:16:00.193 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:00 compute-0 ovn_controller[97701]: 2025-12-10T20:16:00Z|00195|binding|INFO|Releasing lport 8291c7e4-dc68-408b-9b00-13c10902b8a7 from this chassis (sb_readonly=0)
Dec 10 20:16:00 compute-0 ovn_controller[97701]: 2025-12-10T20:16:00Z|00196|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:16:00 compute-0 nova_compute[189279]: 2025-12-10 20:16:00.232 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:00 compute-0 nova_compute[189279]: 2025-12-10 20:16:00.934 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:01 compute-0 nova_compute[189279]: 2025-12-10 20:16:01.313 189283 DEBUG nova.compute.manager [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-changed-a48d3ed3-7e30-4488-b166-81c4c64bc0af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:16:01 compute-0 nova_compute[189279]: 2025-12-10 20:16:01.314 189283 DEBUG nova.compute.manager [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Refreshing instance network info cache due to event network-changed-a48d3ed3-7e30-4488-b166-81c4c64bc0af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:16:01 compute-0 nova_compute[189279]: 2025-12-10 20:16:01.315 189283 DEBUG oslo_concurrency.lockutils [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:16:01 compute-0 nova_compute[189279]: 2025-12-10 20:16:01.316 189283 DEBUG oslo_concurrency.lockutils [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:16:01 compute-0 nova_compute[189279]: 2025-12-10 20:16:01.317 189283 DEBUG nova.network.neutron [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Refreshing network info cache for port a48d3ed3-7e30-4488-b166-81c4c64bc0af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:16:01 compute-0 openstack_network_exporter[205632]: ERROR   20:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:16:01 compute-0 openstack_network_exporter[205632]: ERROR   20:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:16:01 compute-0 openstack_network_exporter[205632]: ERROR   20:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:16:01 compute-0 openstack_network_exporter[205632]: ERROR   20:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:16:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:16:01 compute-0 openstack_network_exporter[205632]: ERROR   20:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:16:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:16:02 compute-0 nova_compute[189279]: 2025-12-10 20:16:02.064 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:02 compute-0 podman[252107]: 2025-12-10 20:16:02.133008941 +0000 UTC m=+0.105145373 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_id=edpm)
Dec 10 20:16:02 compute-0 nova_compute[189279]: 2025-12-10 20:16:02.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:02 compute-0 nova_compute[189279]: 2025-12-10 20:16:02.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:03 compute-0 nova_compute[189279]: 2025-12-10 20:16:03.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:03 compute-0 nova_compute[189279]: 2025-12-10 20:16:03.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:16:03 compute-0 nova_compute[189279]: 2025-12-10 20:16:03.534 189283 DEBUG nova.network.neutron [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updated VIF entry in instance network info cache for port a48d3ed3-7e30-4488-b166-81c4c64bc0af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:16:03 compute-0 nova_compute[189279]: 2025-12-10 20:16:03.535 189283 DEBUG nova.network.neutron [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updating instance_info_cache with network_info: [{"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:16:03 compute-0 nova_compute[189279]: 2025-12-10 20:16:03.573 189283 DEBUG oslo_concurrency.lockutils [req-64dd3914-0306-41af-8421-3e900cc84506 req-0e43b63e-4368-4860-a797-5ebb8ec4e637 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:16:03 compute-0 nova_compute[189279]: 2025-12-10 20:16:03.984 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:04 compute-0 nova_compute[189279]: 2025-12-10 20:16:04.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:04 compute-0 nova_compute[189279]: 2025-12-10 20:16:04.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:16:04 compute-0 nova_compute[189279]: 2025-12-10 20:16:04.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:16:04 compute-0 nova_compute[189279]: 2025-12-10 20:16:04.744 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:16:04 compute-0 nova_compute[189279]: 2025-12-10 20:16:04.745 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:16:04 compute-0 nova_compute[189279]: 2025-12-10 20:16:04.745 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:16:04 compute-0 nova_compute[189279]: 2025-12-10 20:16:04.746 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:16:05 compute-0 nova_compute[189279]: 2025-12-10 20:16:05.198 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.097 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.117 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.118 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.519 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.520 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.521 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.521 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.627 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.693 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.694 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.763 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.771 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.831 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.832 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:16:06 compute-0 nova_compute[189279]: 2025-12-10 20:16:06.894 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.066 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.264 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.265 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4962MB free_disk=72.26676940917969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.266 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.266 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.512 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.513 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 6d92fd7a-b7be-41bb-a2f4-d005ef181baf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.514 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.515 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.591 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.609 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.633 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:16:07 compute-0 nova_compute[189279]: 2025-12-10 20:16:07.633 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:16:10 compute-0 nova_compute[189279]: 2025-12-10 20:16:10.203 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:12 compute-0 nova_compute[189279]: 2025-12-10 20:16:12.068 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:13 compute-0 podman[252149]: 2025-12-10 20:16:13.12969377 +0000 UTC m=+0.090769985 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec 10 20:16:13 compute-0 podman[252148]: 2025-12-10 20:16:13.150512291 +0000 UTC m=+0.114437783 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:16:14 compute-0 nova_compute[189279]: 2025-12-10 20:16:14.634 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:16:15 compute-0 nova_compute[189279]: 2025-12-10 20:16:15.207 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:16 compute-0 ovn_controller[97701]: 2025-12-10T20:16:16Z|00197|binding|INFO|Releasing lport 8291c7e4-dc68-408b-9b00-13c10902b8a7 from this chassis (sb_readonly=0)
Dec 10 20:16:16 compute-0 ovn_controller[97701]: 2025-12-10T20:16:16Z|00198|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:16:16 compute-0 nova_compute[189279]: 2025-12-10 20:16:16.503 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:17 compute-0 nova_compute[189279]: 2025-12-10 20:16:17.072 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:20 compute-0 nova_compute[189279]: 2025-12-10 20:16:20.212 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:21 compute-0 podman[252190]: 2025-12-10 20:16:21.140227139 +0000 UTC m=+0.103103318 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 10 20:16:21 compute-0 podman[252196]: 2025-12-10 20:16:21.151823961 +0000 UTC m=+0.098373750 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, vcs-type=git, version=9.4, release=1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 10 20:16:21 compute-0 podman[252191]: 2025-12-10 20:16:21.168159721 +0000 UTC m=+0.122190631 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:16:22 compute-0 nova_compute[189279]: 2025-12-10 20:16:22.076 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:16:23.399 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:16:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:16:23.400 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:16:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:16:23.401 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:16:25 compute-0 nova_compute[189279]: 2025-12-10 20:16:25.218 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:26 compute-0 podman[252246]: 2025-12-10 20:16:26.12149009 +0000 UTC m=+0.092152963 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 10 20:16:26 compute-0 podman[252247]: 2025-12-10 20:16:26.158337222 +0000 UTC m=+0.116838408 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:16:27 compute-0 nova_compute[189279]: 2025-12-10 20:16:27.079 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:28 compute-0 podman[252289]: 2025-12-10 20:16:28.183883997 +0000 UTC m=+0.152481218 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 20:16:29 compute-0 podman[203484]: time="2025-12-10T20:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:16:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Dec 10 20:16:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5270 "" "Go-http-client/1.1"
Dec 10 20:16:30 compute-0 nova_compute[189279]: 2025-12-10 20:16:30.223 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:31 compute-0 openstack_network_exporter[205632]: ERROR   20:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:16:31 compute-0 openstack_network_exporter[205632]: ERROR   20:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:16:31 compute-0 openstack_network_exporter[205632]: ERROR   20:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:16:31 compute-0 openstack_network_exporter[205632]: ERROR   20:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:16:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:16:31 compute-0 openstack_network_exporter[205632]: ERROR   20:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:16:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:16:32 compute-0 sshd-session[252329]: Invalid user solv from 80.94.92.184 port 34324
Dec 10 20:16:32 compute-0 nova_compute[189279]: 2025-12-10 20:16:32.082 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:32 compute-0 ovn_controller[97701]: 2025-12-10T20:16:32Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:d3:5f 10.100.0.7
Dec 10 20:16:32 compute-0 ovn_controller[97701]: 2025-12-10T20:16:32Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:d3:5f 10.100.0.7
Dec 10 20:16:32 compute-0 sshd-session[252329]: Connection closed by invalid user solv 80.94.92.184 port 34324 [preauth]
Dec 10 20:16:33 compute-0 podman[252331]: 2025-12-10 20:16:33.137236735 +0000 UTC m=+0.106473819 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 20:16:35 compute-0 nova_compute[189279]: 2025-12-10 20:16:35.230 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:37 compute-0 nova_compute[189279]: 2025-12-10 20:16:37.089 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:40 compute-0 nova_compute[189279]: 2025-12-10 20:16:40.234 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:42 compute-0 nova_compute[189279]: 2025-12-10 20:16:42.093 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:44 compute-0 podman[252352]: 2025-12-10 20:16:44.11604964 +0000 UTC m=+0.086368548 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:16:44 compute-0 podman[252353]: 2025-12-10 20:16:44.124508077 +0000 UTC m=+0.087127988 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:16:45 compute-0 nova_compute[189279]: 2025-12-10 20:16:45.239 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:47 compute-0 nova_compute[189279]: 2025-12-10 20:16:47.096 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:47 compute-0 ovn_controller[97701]: 2025-12-10T20:16:47Z|00199|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Dec 10 20:16:50 compute-0 nova_compute[189279]: 2025-12-10 20:16:50.244 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:52 compute-0 podman[252394]: 2025-12-10 20:16:52.093097916 +0000 UTC m=+0.062845293 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:16:52 compute-0 nova_compute[189279]: 2025-12-10 20:16:52.099 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:52 compute-0 podman[252395]: 2025-12-10 20:16:52.12924904 +0000 UTC m=+0.094922758 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 10 20:16:52 compute-0 podman[252396]: 2025-12-10 20:16:52.134046299 +0000 UTC m=+0.088519805 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, com.redhat.component=ubi9-container, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, architecture=x86_64)
Dec 10 20:16:55 compute-0 nova_compute[189279]: 2025-12-10 20:16:55.250 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:57 compute-0 nova_compute[189279]: 2025-12-10 20:16:57.100 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:16:57 compute-0 podman[252449]: 2025-12-10 20:16:57.105954248 +0000 UTC m=+0.070570241 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:16:57 compute-0 podman[252448]: 2025-12-10 20:16:57.122223907 +0000 UTC m=+0.092992577 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:16:59 compute-0 podman[252494]: 2025-12-10 20:16:59.178538919 +0000 UTC m=+0.156761783 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 20:16:59 compute-0 podman[203484]: time="2025-12-10T20:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:16:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Dec 10 20:16:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5268 "" "Go-http-client/1.1"
Dec 10 20:17:00 compute-0 nova_compute[189279]: 2025-12-10 20:17:00.259 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:01 compute-0 openstack_network_exporter[205632]: ERROR   20:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:17:01 compute-0 openstack_network_exporter[205632]: ERROR   20:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:17:01 compute-0 openstack_network_exporter[205632]: ERROR   20:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:17:01 compute-0 openstack_network_exporter[205632]: ERROR   20:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:17:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:17:01 compute-0 openstack_network_exporter[205632]: ERROR   20:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:17:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:17:01 compute-0 nova_compute[189279]: 2025-12-10 20:17:01.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:02 compute-0 nova_compute[189279]: 2025-12-10 20:17:02.103 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:02 compute-0 nova_compute[189279]: 2025-12-10 20:17:02.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:04 compute-0 podman[252520]: 2025-12-10 20:17:04.180486617 +0000 UTC m=+0.147596016 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 10 20:17:04 compute-0 nova_compute[189279]: 2025-12-10 20:17:04.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:05 compute-0 nova_compute[189279]: 2025-12-10 20:17:05.267 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:05.418 106671 DEBUG eventlet.wsgi.server [-] (106671) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:05.421 106671 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: Accept: */*
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: Connection: close
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: Content-Type: text/plain
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: Host: 169.254.169.254
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: User-Agent: curl/7.84.0
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: X-Forwarded-For: 10.100.0.7
Dec 10 20:17:05 compute-0 ovn_metadata_agent[106559]: X-Ovn-Network-Id: cf2a01cc-d40e-4a4b-917f-0cb626e4f369 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 10 20:17:05 compute-0 nova_compute[189279]: 2025-12-10 20:17:05.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:05 compute-0 nova_compute[189279]: 2025-12-10 20:17:05.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:17:06 compute-0 nova_compute[189279]: 2025-12-10 20:17:06.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:06 compute-0 nova_compute[189279]: 2025-12-10 20:17:06.492 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:06.577 106671 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:06.577 106671 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.1566465
Dec 10 20:17:06 compute-0 haproxy-metadata-proxy-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252088]: 10.100.0.7:49640 [10/Dec/2025:20:17:05.415] listener listener/metadata 0/0/0/1162/1162 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:06.719 106671 DEBUG eventlet.wsgi.server [-] (106671) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:06.720 106671 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: Accept: */*
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: Connection: close
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: Content-Length: 100
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: Content-Type: application/x-www-form-urlencoded
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: Host: 169.254.169.254
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: User-Agent: curl/7.84.0
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: X-Forwarded-For: 10.100.0.7
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: X-Ovn-Network-Id: cf2a01cc-d40e-4a4b-917f-0cb626e4f369
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: 
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:06.920 106671 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 10 20:17:06 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:06.921 106671 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2010818
Dec 10 20:17:06 compute-0 haproxy-metadata-proxy-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252088]: 10.100.0.7:49656 [10/Dec/2025:20:17:06.717] listener listener/metadata 0/0/0/204/204 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec 10 20:17:07 compute-0 nova_compute[189279]: 2025-12-10 20:17:07.053 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:17:07 compute-0 nova_compute[189279]: 2025-12-10 20:17:07.054 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:17:07 compute-0 nova_compute[189279]: 2025-12-10 20:17:07.055 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:17:07 compute-0 nova_compute[189279]: 2025-12-10 20:17:07.107 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.567 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updating instance_info_cache with network_info: [{"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.588 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-6d92fd7a-b7be-41bb-a2f4-d005ef181baf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.589 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.591 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.592 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.593 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.619 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.621 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.624 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.625 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.711 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.805 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.806 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.873 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.889 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.958 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:17:08 compute-0 nova_compute[189279]: 2025-12-10 20:17:08.959 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.023 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.199 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.204 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.205 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.206 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.206 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.208 189283 INFO nova.compute.manager [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Terminating instance
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.210 189283 DEBUG nova.compute.manager [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:17:09 compute-0 kernel: tapa48d3ed3-7e (unregistering): left promiscuous mode
Dec 10 20:17:09 compute-0 NetworkManager[56238]: <info>  [1765397829.2578] device (tapa48d3ed3-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.271 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00200|binding|INFO|Releasing lport a48d3ed3-7e30-4488-b166-81c4c64bc0af from this chassis (sb_readonly=0)
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00201|binding|INFO|Setting lport a48d3ed3-7e30-4488-b166-81c4c64bc0af down in Southbound
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00202|binding|INFO|Removing iface tapa48d3ed3-7e ovn-installed in OVS
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.276 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.286 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:d3:5f 10.100.0.7'], port_security=['fa:16:3e:76:d3:5f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6d92fd7a-b7be-41bb-a2f4-d005ef181baf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d673f680e841bb84a2447a5bd69e58', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c59905a-1646-4b04-ac08-dd70a7ae7437 ac7e200f-9865-421f-96b9-20c05c927e99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f054f73-22cb-4b6c-80c2-fbc673731e1f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a48d3ed3-7e30-4488-b166-81c4c64bc0af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.287 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a48d3ed3-7e30-4488-b166-81c4c64bc0af in datapath cf2a01cc-d40e-4a4b-917f-0cb626e4f369 unbound from our chassis
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.290 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf2a01cc-d40e-4a4b-917f-0cb626e4f369, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.288 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.295 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[db067438-22ec-4b7f-b64f-7279a54ff500]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.297 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369 namespace which is not needed anymore
Dec 10 20:17:09 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec 10 20:17:09 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 44.203s CPU time.
Dec 10 20:17:09 compute-0 systemd-machined[155642]: Machine qemu-16-instance-0000000f terminated.
Dec 10 20:17:09 compute-0 kernel: tapa48d3ed3-7e: entered promiscuous mode
Dec 10 20:17:09 compute-0 NetworkManager[56238]: <info>  [1765397829.4446] manager: (tapa48d3ed3-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00203|binding|INFO|Claiming lport a48d3ed3-7e30-4488-b166-81c4c64bc0af for this chassis.
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00204|binding|INFO|a48d3ed3-7e30-4488-b166-81c4c64bc0af: Claiming fa:16:3e:76:d3:5f 10.100.0.7
Dec 10 20:17:09 compute-0 kernel: tapa48d3ed3-7e (unregistering): left promiscuous mode
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.450 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.460 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:d3:5f 10.100.0.7'], port_security=['fa:16:3e:76:d3:5f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6d92fd7a-b7be-41bb-a2f4-d005ef181baf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d673f680e841bb84a2447a5bd69e58', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c59905a-1646-4b04-ac08-dd70a7ae7437 ac7e200f-9865-421f-96b9-20c05c927e99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f054f73-22cb-4b6c-80c2-fbc673731e1f, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a48d3ed3-7e30-4488-b166-81c4c64bc0af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00205|binding|INFO|Setting lport a48d3ed3-7e30-4488-b166-81c4c64bc0af ovn-installed in OVS
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00206|binding|INFO|Setting lport a48d3ed3-7e30-4488-b166-81c4c64bc0af up in Southbound
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00207|binding|INFO|Releasing lport a48d3ed3-7e30-4488-b166-81c4c64bc0af from this chassis (sb_readonly=1)
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00208|if_status|INFO|Dropped 2 log messages in last 735 seconds (most recently, 735 seconds ago) due to excessive rate
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00209|if_status|INFO|Not setting lport a48d3ed3-7e30-4488-b166-81c4c64bc0af down as sb is readonly
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00210|binding|INFO|Removing iface tapa48d3ed3-7e ovn-installed in OVS
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00211|binding|INFO|Releasing lport a48d3ed3-7e30-4488-b166-81c4c64bc0af from this chassis (sb_readonly=1)
Dec 10 20:17:09 compute-0 ovn_controller[97701]: 2025-12-10T20:17:09Z|00212|binding|INFO|Setting lport a48d3ed3-7e30-4488-b166-81c4c64bc0af down in Southbound
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.481 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.489 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:d3:5f 10.100.0.7'], port_security=['fa:16:3e:76:d3:5f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6d92fd7a-b7be-41bb-a2f4-d005ef181baf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d673f680e841bb84a2447a5bd69e58', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c59905a-1646-4b04-ac08-dd70a7ae7437 ac7e200f-9865-421f-96b9-20c05c927e99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f054f73-22cb-4b6c-80c2-fbc673731e1f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=a48d3ed3-7e30-4488-b166-81c4c64bc0af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.501 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.532 189283 INFO nova.virt.libvirt.driver [-] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Instance destroyed successfully.
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.532 189283 DEBUG nova.objects.instance [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lazy-loading 'resources' on Instance uuid 6d92fd7a-b7be-41bb-a2f4-d005ef181baf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:17:09 compute-0 neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252068]: [NOTICE]   (252082) : haproxy version is 2.8.14-c23fe91
Dec 10 20:17:09 compute-0 neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252068]: [NOTICE]   (252082) : path to executable is /usr/sbin/haproxy
Dec 10 20:17:09 compute-0 neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252068]: [WARNING]  (252082) : Exiting Master process...
Dec 10 20:17:09 compute-0 neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252068]: [ALERT]    (252082) : Current worker (252088) exited with code 143 (Terminated)
Dec 10 20:17:09 compute-0 neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369[252068]: [WARNING]  (252082) : All workers exited. Exiting... (0)
Dec 10 20:17:09 compute-0 systemd[1]: libpod-97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b.scope: Deactivated successfully.
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.548 189283 DEBUG nova.virt.libvirt.vif [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1965064837',display_name='tempest-TestServerBasicOps-server-1965064837',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1965064837',id=15,image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDtClPidoUZr/gtSoy5X2jhL9FQmmNoYQOtsoLtYl8uwQbfKuCTDOK3f56CVPEHz1hPGBkvjzXRTqQqDtjC90N1kV+iSfZ30g2TMqOHCMIVxd2yw0WwlCN2U3/wlr+LLIQ==',key_name='tempest-TestServerBasicOps-1376247009',keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:15:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d673f680e841bb84a2447a5bd69e58',ramdisk_id='',reservation_id='r-u03yeqdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='33b11153-486b-4d32-bc63-6b6a6ed0b704',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-487853199',owner_user_name='tempest-TestServerBasicOps-487853199-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:17:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00203ee721e44cf0bbd263737b393460',uuid=6d92fd7a-b7be-41bb-a2f4-d005ef181baf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:17:09 compute-0 podman[252573]: 2025-12-10 20:17:09.549110741 +0000 UTC m=+0.103607932 container died 97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.550 189283 DEBUG nova.network.os_vif_util [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Converting VIF {"id": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "address": "fa:16:3e:76:d3:5f", "network": {"id": "cf2a01cc-d40e-4a4b-917f-0cb626e4f369", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1702706878-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d673f680e841bb84a2447a5bd69e58", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa48d3ed3-7e", "ovs_interfaceid": "a48d3ed3-7e30-4488-b166-81c4c64bc0af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.552 189283 DEBUG nova.network.os_vif_util [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=a48d3ed3-7e30-4488-b166-81c4c64bc0af,network=Network(cf2a01cc-d40e-4a4b-917f-0cb626e4f369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48d3ed3-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.553 189283 DEBUG os_vif [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=a48d3ed3-7e30-4488-b166-81c4c64bc0af,network=Network(cf2a01cc-d40e-4a4b-917f-0cb626e4f369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48d3ed3-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.555 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.556 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa48d3ed3-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.558 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.560 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.566 189283 INFO os_vif [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=a48d3ed3-7e30-4488-b166-81c4c64bc0af,network=Network(cf2a01cc-d40e-4a4b-917f-0cb626e4f369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa48d3ed3-7e')
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.567 189283 INFO nova.virt.libvirt.driver [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Deleting instance files /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf_del
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.568 189283 INFO nova.virt.libvirt.driver [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Deletion of /var/lib/nova/instances/6d92fd7a-b7be-41bb-a2f4-d005ef181baf_del complete
Dec 10 20:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b-userdata-shm.mount: Deactivated successfully.
Dec 10 20:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed457f465072817fcc38b2e7a0bdb30a9fd450d3b7697c73f558148ebc5e0862-merged.mount: Deactivated successfully.
Dec 10 20:17:09 compute-0 podman[252573]: 2025-12-10 20:17:09.600196527 +0000 UTC m=+0.154693718 container cleanup 97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.628 189283 INFO nova.compute.manager [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Took 0.42 seconds to destroy the instance on the hypervisor.
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.629 189283 DEBUG oslo.service.loopingcall [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.630 189283 DEBUG nova.compute.manager [-] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.631 189283 DEBUG nova.network.neutron [-] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:17:09 compute-0 systemd[1]: libpod-conmon-97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b.scope: Deactivated successfully.
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.678 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.679 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5018MB free_disk=72.23860931396484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.680 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.680 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:09 compute-0 podman[252614]: 2025-12-10 20:17:09.700673473 +0000 UTC m=+0.062611647 container remove 97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.711 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e678a4-8c8e-4844-9104-4d50799e4f54]: (4, ('Wed Dec 10 08:17:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369 (97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b)\n97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b\nWed Dec 10 08:17:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369 (97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b)\n97cc308d943c016475e032e6d05d660bf5a59adae33ee02041b2c5276712c65b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.733 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[bbac4c2d-9c7f-44d7-972f-6f7489f2aa72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.734 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf2a01cc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.736 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 kernel: tapcf2a01cc-d0: left promiscuous mode
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.751 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.757 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[1b69c73d-5161-4976-90ea-3bac93075a48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.761 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.762 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance 6d92fd7a-b7be-41bb-a2f4-d005ef181baf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.762 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.763 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.777 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[c323020a-5a3f-4b2c-b7ba-606e55f5dde2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.780 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f8cb31-3ac0-44da-8bd9-b0847feda30d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.798 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[3c92a79d-236e-4ea0-8282-a828a73ccc1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508490, 'reachable_time': 21050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252631, 'error': None, 'target': 'ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 systemd[1]: run-netns-ovnmeta\x2dcf2a01cc\x2dd40e\x2d4a4b\x2d917f\x2d0cb626e4f369.mount: Deactivated successfully.
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.815 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf2a01cc-d40e-4a4b-917f-0cb626e4f369 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.818 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[450b40a7-fdda-4bc7-9cca-60b019ebc10f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.820 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a48d3ed3-7e30-4488-b166-81c4c64bc0af in datapath cf2a01cc-d40e-4a4b-917f-0cb626e4f369 unbound from our chassis
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.820 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.822 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf2a01cc-d40e-4a4b-917f-0cb626e4f369, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.824 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed87538-3867-402f-b0a4-5b36bb8fb80f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.825 106564 INFO neutron.agent.ovn.metadata.agent [-] Port a48d3ed3-7e30-4488-b166-81c4c64bc0af in datapath cf2a01cc-d40e-4a4b-917f-0cb626e4f369 unbound from our chassis
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.826 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf2a01cc-d40e-4a4b-917f-0cb626e4f369, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:17:09 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:09.827 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[650d85d9-32bb-466c-a904-10ef883787e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.833 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.852 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:17:09 compute-0 nova_compute[189279]: 2025-12-10 20:17:09.852 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:10 compute-0 nova_compute[189279]: 2025-12-10 20:17:10.948 189283 DEBUG nova.compute.manager [req-8c6cd7ec-402e-4545-8a35-fc346f84e03f req-595fbc96-2065-4fd6-bc19-c07feab8f47c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-vif-unplugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:17:10 compute-0 nova_compute[189279]: 2025-12-10 20:17:10.952 189283 DEBUG oslo_concurrency.lockutils [req-8c6cd7ec-402e-4545-8a35-fc346f84e03f req-595fbc96-2065-4fd6-bc19-c07feab8f47c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:10 compute-0 nova_compute[189279]: 2025-12-10 20:17:10.953 189283 DEBUG oslo_concurrency.lockutils [req-8c6cd7ec-402e-4545-8a35-fc346f84e03f req-595fbc96-2065-4fd6-bc19-c07feab8f47c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:10 compute-0 nova_compute[189279]: 2025-12-10 20:17:10.954 189283 DEBUG oslo_concurrency.lockutils [req-8c6cd7ec-402e-4545-8a35-fc346f84e03f req-595fbc96-2065-4fd6-bc19-c07feab8f47c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:10 compute-0 nova_compute[189279]: 2025-12-10 20:17:10.955 189283 DEBUG nova.compute.manager [req-8c6cd7ec-402e-4545-8a35-fc346f84e03f req-595fbc96-2065-4fd6-bc19-c07feab8f47c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] No waiting events found dispatching network-vif-unplugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:17:10 compute-0 nova_compute[189279]: 2025-12-10 20:17:10.955 189283 DEBUG nova.compute.manager [req-8c6cd7ec-402e-4545-8a35-fc346f84e03f req-595fbc96-2065-4fd6-bc19-c07feab8f47c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-vif-unplugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:17:11 compute-0 nova_compute[189279]: 2025-12-10 20:17:11.234 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:11 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:11.234 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:17:11 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:11.236 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.110 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.184 189283 DEBUG nova.network.neutron [-] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.204 189283 INFO nova.compute.manager [-] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Took 2.57 seconds to deallocate network for instance.
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.251 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.253 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.334 189283 DEBUG nova.compute.provider_tree [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.352 189283 DEBUG nova.scheduler.client.report [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.392 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.423 189283 INFO nova.scheduler.client.report [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Deleted allocations for instance 6d92fd7a-b7be-41bb-a2f4-d005ef181baf
Dec 10 20:17:12 compute-0 nova_compute[189279]: 2025-12-10 20:17:12.496 189283 DEBUG oslo_concurrency.lockutils [None req-0cc9f70a-415c-4c5f-b86e-870b5c5adf23 00203ee721e44cf0bbd263737b393460 46d673f680e841bb84a2447a5bd69e58 - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:13 compute-0 nova_compute[189279]: 2025-12-10 20:17:13.026 189283 DEBUG nova.compute.manager [req-411095bb-0b3a-45bc-8697-89bccf981533 req-44faf249-0329-47f6-b564-57b6fd651a98 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:17:13 compute-0 nova_compute[189279]: 2025-12-10 20:17:13.027 189283 DEBUG oslo_concurrency.lockutils [req-411095bb-0b3a-45bc-8697-89bccf981533 req-44faf249-0329-47f6-b564-57b6fd651a98 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:13 compute-0 nova_compute[189279]: 2025-12-10 20:17:13.027 189283 DEBUG oslo_concurrency.lockutils [req-411095bb-0b3a-45bc-8697-89bccf981533 req-44faf249-0329-47f6-b564-57b6fd651a98 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:13 compute-0 nova_compute[189279]: 2025-12-10 20:17:13.028 189283 DEBUG oslo_concurrency.lockutils [req-411095bb-0b3a-45bc-8697-89bccf981533 req-44faf249-0329-47f6-b564-57b6fd651a98 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "6d92fd7a-b7be-41bb-a2f4-d005ef181baf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:13 compute-0 nova_compute[189279]: 2025-12-10 20:17:13.028 189283 DEBUG nova.compute.manager [req-411095bb-0b3a-45bc-8697-89bccf981533 req-44faf249-0329-47f6-b564-57b6fd651a98 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] No waiting events found dispatching network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:17:13 compute-0 nova_compute[189279]: 2025-12-10 20:17:13.029 189283 WARNING nova.compute.manager [req-411095bb-0b3a-45bc-8697-89bccf981533 req-44faf249-0329-47f6-b564-57b6fd651a98 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received unexpected event network-vif-plugged-a48d3ed3-7e30-4488-b166-81c4c64bc0af for instance with vm_state deleted and task_state None.
Dec 10 20:17:13 compute-0 nova_compute[189279]: 2025-12-10 20:17:13.029 189283 DEBUG nova.compute.manager [req-411095bb-0b3a-45bc-8697-89bccf981533 req-44faf249-0329-47f6-b564-57b6fd651a98 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Received event network-vif-deleted-a48d3ed3-7e30-4488-b166-81c4c64bc0af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:17:14 compute-0 nova_compute[189279]: 2025-12-10 20:17:14.559 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:14 compute-0 nova_compute[189279]: 2025-12-10 20:17:14.749 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:14 compute-0 nova_compute[189279]: 2025-12-10 20:17:14.775 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:17:14 compute-0 podman[252632]: 2025-12-10 20:17:14.791776933 +0000 UTC m=+0.096496270 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:17:14 compute-0 podman[252633]: 2025-12-10 20:17:14.821693268 +0000 UTC m=+0.120944718 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm)
Dec 10 20:17:16 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:16.240 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:17:17 compute-0 nova_compute[189279]: 2025-12-10 20:17:17.113 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:19 compute-0 ovn_controller[97701]: 2025-12-10T20:17:19Z|00213|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:17:19 compute-0 nova_compute[189279]: 2025-12-10 20:17:19.541 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:19 compute-0 nova_compute[189279]: 2025-12-10 20:17:19.561 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:19 compute-0 ovn_controller[97701]: 2025-12-10T20:17:19Z|00214|binding|INFO|Releasing lport eedd7beb-1e55-4b8d-a932-7d0592d2e98a from this chassis (sb_readonly=0)
Dec 10 20:17:19 compute-0 nova_compute[189279]: 2025-12-10 20:17:19.770 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:22 compute-0 nova_compute[189279]: 2025-12-10 20:17:22.116 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:23 compute-0 podman[252679]: 2025-12-10 20:17:23.09410756 +0000 UTC m=+0.075900865 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 10 20:17:23 compute-0 podman[252680]: 2025-12-10 20:17:23.10821229 +0000 UTC m=+0.083547631 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:17:23 compute-0 podman[252678]: 2025-12-10 20:17:23.109882135 +0000 UTC m=+0.096147460 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 20:17:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:23.400 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:17:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:23.401 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:17:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:17:23.402 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:17:24 compute-0 nova_compute[189279]: 2025-12-10 20:17:24.528 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765397829.5269349, 6d92fd7a-b7be-41bb-a2f4-d005ef181baf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:17:24 compute-0 nova_compute[189279]: 2025-12-10 20:17:24.528 189283 INFO nova.compute.manager [-] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] VM Stopped (Lifecycle Event)
Dec 10 20:17:24 compute-0 nova_compute[189279]: 2025-12-10 20:17:24.553 189283 DEBUG nova.compute.manager [None req-34b6bdd0-9956-4836-ad15-778bdac2c06a - - - - - -] [instance: 6d92fd7a-b7be-41bb-a2f4-d005ef181baf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:17:24 compute-0 nova_compute[189279]: 2025-12-10 20:17:24.563 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:27 compute-0 nova_compute[189279]: 2025-12-10 20:17:27.118 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:28 compute-0 podman[252734]: 2025-12-10 20:17:28.110791985 +0000 UTC m=+0.089132132 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, tcib_managed=true)
Dec 10 20:17:28 compute-0 podman[252735]: 2025-12-10 20:17:28.137082463 +0000 UTC m=+0.110063825 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:17:29 compute-0 nova_compute[189279]: 2025-12-10 20:17:29.568 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:29 compute-0 podman[203484]: time="2025-12-10T20:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:17:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:17:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec 10 20:17:30 compute-0 podman[252774]: 2025-12-10 20:17:30.176362557 +0000 UTC m=+0.142336244 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 20:17:31 compute-0 openstack_network_exporter[205632]: ERROR   20:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:17:31 compute-0 openstack_network_exporter[205632]: ERROR   20:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:17:31 compute-0 openstack_network_exporter[205632]: ERROR   20:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:17:31 compute-0 openstack_network_exporter[205632]: ERROR   20:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:17:31 compute-0 openstack_network_exporter[205632]: ERROR   20:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:17:32 compute-0 nova_compute[189279]: 2025-12-10 20:17:32.125 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:34 compute-0 nova_compute[189279]: 2025-12-10 20:17:34.573 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:35 compute-0 podman[252799]: 2025-12-10 20:17:35.12093456 +0000 UTC m=+0.100076727 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251210, io.buildah.version=1.41.4, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 10 20:17:37 compute-0 nova_compute[189279]: 2025-12-10 20:17:37.128 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:39 compute-0 nova_compute[189279]: 2025-12-10 20:17:39.578 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:42 compute-0 nova_compute[189279]: 2025-12-10 20:17:42.132 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.183 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.184 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.184 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.187 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.193 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'name': 'te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.194 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.196 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:17:42.194452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.196 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:17:42.196865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.224 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.225 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.226 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.227 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.227 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.227 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.227 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.227 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:17:42.226411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:17:42.228061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.232 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.233 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.233 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.233 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.233 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:17:42.233532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.234 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.234 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.235 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:17:42.234842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.235 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.235 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.236 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.236 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.236 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.236 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:17:42.236223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.236 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.237 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.237 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:17:42.237465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.237 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.238 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.238 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.238 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:17:42.238723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.259 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/memory.usage volume: 42.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.260 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.260 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:17:42.260994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.262 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.262 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.263 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:17:42.262059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.263 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.263 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:17:42.263525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:17:42.264643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.265 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.265 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.265 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:17:42.265645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:17:42.266697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.297 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 29436928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.298 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.298 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/cpu volume: 189830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:17:42.298934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.299 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.300 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 542055066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:17:42.299899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.300 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 53898242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.300 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.301 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.301 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 1055 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.301 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.302 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:17:42.301106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.302 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.302 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.302 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:17:42.302398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:17:42.303565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.304 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:17:42.304788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.305 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 3651583829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:17:42.305822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.306 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.306 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.307 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.307 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.307 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.307 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.308 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:17:42.306973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.308 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:17:42.308187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:17:42.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:17:44 compute-0 nova_compute[189279]: 2025-12-10 20:17:44.580 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:45 compute-0 podman[252821]: 2025-12-10 20:17:45.11492852 +0000 UTC m=+0.093015497 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:17:45 compute-0 podman[252822]: 2025-12-10 20:17:45.117836778 +0000 UTC m=+0.090965721 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:17:47 compute-0 nova_compute[189279]: 2025-12-10 20:17:47.135 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:47 compute-0 sshd-session[252866]: Received disconnect from 193.46.255.217 port 50056:11:  [preauth]
Dec 10 20:17:47 compute-0 sshd-session[252866]: Disconnected from authenticating user root 193.46.255.217 port 50056 [preauth]
Dec 10 20:17:49 compute-0 nova_compute[189279]: 2025-12-10 20:17:49.584 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:52 compute-0 nova_compute[189279]: 2025-12-10 20:17:52.138 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:53 compute-0 ovn_controller[97701]: 2025-12-10T20:17:53Z|00215|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec 10 20:17:54 compute-0 podman[252869]: 2025-12-10 20:17:54.105525964 +0000 UTC m=+0.079427560 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 10 20:17:54 compute-0 podman[252870]: 2025-12-10 20:17:54.11277696 +0000 UTC m=+0.083809239 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Dec 10 20:17:54 compute-0 podman[252871]: 2025-12-10 20:17:54.157537675 +0000 UTC m=+0.118462822 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543)
Dec 10 20:17:54 compute-0 nova_compute[189279]: 2025-12-10 20:17:54.587 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:57 compute-0 nova_compute[189279]: 2025-12-10 20:17:57.141 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:58 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 10 20:17:58 compute-0 podman[252924]: 2025-12-10 20:17:58.574660082 +0000 UTC m=+0.091569788 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:17:58 compute-0 podman[252925]: 2025-12-10 20:17:58.583189782 +0000 UTC m=+0.093738986 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:17:59 compute-0 nova_compute[189279]: 2025-12-10 20:17:59.590 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:17:59 compute-0 podman[203484]: time="2025-12-10T20:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:17:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:17:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec 10 20:18:01 compute-0 podman[252966]: 2025-12-10 20:18:01.174537135 +0000 UTC m=+0.155091788 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:18:01 compute-0 openstack_network_exporter[205632]: ERROR   20:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:18:01 compute-0 openstack_network_exporter[205632]: ERROR   20:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:18:01 compute-0 openstack_network_exporter[205632]: ERROR   20:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:18:01 compute-0 openstack_network_exporter[205632]: ERROR   20:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:18:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:18:01 compute-0 openstack_network_exporter[205632]: ERROR   20:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:18:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:18:02 compute-0 nova_compute[189279]: 2025-12-10 20:18:02.144 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:02 compute-0 nova_compute[189279]: 2025-12-10 20:18:02.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:03 compute-0 nova_compute[189279]: 2025-12-10 20:18:03.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:04 compute-0 nova_compute[189279]: 2025-12-10 20:18:04.593 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:05 compute-0 nova_compute[189279]: 2025-12-10 20:18:05.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:06 compute-0 podman[252993]: 2025-12-10 20:18:06.101236366 +0000 UTC m=+0.074229740 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 10 20:18:07 compute-0 nova_compute[189279]: 2025-12-10 20:18:07.152 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:07 compute-0 nova_compute[189279]: 2025-12-10 20:18:07.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:07 compute-0 nova_compute[189279]: 2025-12-10 20:18:07.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:18:08 compute-0 nova_compute[189279]: 2025-12-10 20:18:08.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:08 compute-0 nova_compute[189279]: 2025-12-10 20:18:08.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:18:08 compute-0 nova_compute[189279]: 2025-12-10 20:18:08.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:18:09 compute-0 nova_compute[189279]: 2025-12-10 20:18:09.113 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:18:09 compute-0 nova_compute[189279]: 2025-12-10 20:18:09.114 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:18:09 compute-0 nova_compute[189279]: 2025-12-10 20:18:09.114 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:18:09 compute-0 nova_compute[189279]: 2025-12-10 20:18:09.114 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:18:09 compute-0 nova_compute[189279]: 2025-12-10 20:18:09.597 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.657 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.676 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.677 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.678 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.678 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.679 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.707 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.707 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.708 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.708 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.787 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.867 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.869 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:18:10 compute-0 nova_compute[189279]: 2025-12-10 20:18:10.948 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.263 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.265 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5180MB free_disk=72.26753616333008GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.266 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.267 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.388 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.389 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.389 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.477 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.502 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.523 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:18:11 compute-0 nova_compute[189279]: 2025-12-10 20:18:11.524 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:18:12 compute-0 nova_compute[189279]: 2025-12-10 20:18:12.150 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:14 compute-0 nova_compute[189279]: 2025-12-10 20:18:14.601 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:15 compute-0 nova_compute[189279]: 2025-12-10 20:18:15.334 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:18:16 compute-0 podman[253019]: 2025-12-10 20:18:16.118259367 +0000 UTC m=+0.096496641 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:18:16 compute-0 podman[253020]: 2025-12-10 20:18:16.13735388 +0000 UTC m=+0.103002465 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=)
Dec 10 20:18:17 compute-0 nova_compute[189279]: 2025-12-10 20:18:17.154 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:19 compute-0 nova_compute[189279]: 2025-12-10 20:18:19.605 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:22 compute-0 nova_compute[189279]: 2025-12-10 20:18:22.156 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:18:23.401 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:18:23.401 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:18:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:18:23.402 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:18:24 compute-0 nova_compute[189279]: 2025-12-10 20:18:24.609 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:25 compute-0 podman[253066]: 2025-12-10 20:18:25.134139952 +0000 UTC m=+0.100297313 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Dec 10 20:18:25 compute-0 podman[253067]: 2025-12-10 20:18:25.161256632 +0000 UTC m=+0.109510240 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:18:25 compute-0 podman[253068]: 2025-12-10 20:18:25.187311224 +0000 UTC m=+0.126757075 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible)
Dec 10 20:18:27 compute-0 nova_compute[189279]: 2025-12-10 20:18:27.159 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:29 compute-0 podman[253120]: 2025-12-10 20:18:29.14993538 +0000 UTC m=+0.124378691 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:18:29 compute-0 podman[253121]: 2025-12-10 20:18:29.15478233 +0000 UTC m=+0.110376143 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:18:29 compute-0 nova_compute[189279]: 2025-12-10 20:18:29.615 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:29 compute-0 podman[203484]: time="2025-12-10T20:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:18:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:18:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec 10 20:18:31 compute-0 openstack_network_exporter[205632]: ERROR   20:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:18:31 compute-0 openstack_network_exporter[205632]: ERROR   20:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:18:31 compute-0 openstack_network_exporter[205632]: ERROR   20:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:18:31 compute-0 openstack_network_exporter[205632]: ERROR   20:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:18:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:18:31 compute-0 openstack_network_exporter[205632]: ERROR   20:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:18:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:18:32 compute-0 podman[253162]: 2025-12-10 20:18:32.144556464 +0000 UTC m=+0.122338866 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 20:18:32 compute-0 nova_compute[189279]: 2025-12-10 20:18:32.161 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:34 compute-0 nova_compute[189279]: 2025-12-10 20:18:34.619 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:37 compute-0 podman[253189]: 2025-12-10 20:18:37.099535788 +0000 UTC m=+0.073610734 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:18:37 compute-0 nova_compute[189279]: 2025-12-10 20:18:37.168 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:39 compute-0 nova_compute[189279]: 2025-12-10 20:18:39.624 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:42 compute-0 nova_compute[189279]: 2025-12-10 20:18:42.169 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:44 compute-0 nova_compute[189279]: 2025-12-10 20:18:44.627 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:47 compute-0 podman[253210]: 2025-12-10 20:18:47.124195692 +0000 UTC m=+0.107160716 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible)
Dec 10 20:18:47 compute-0 podman[253209]: 2025-12-10 20:18:47.126824353 +0000 UTC m=+0.109205422 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:18:47 compute-0 nova_compute[189279]: 2025-12-10 20:18:47.170 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:49 compute-0 nova_compute[189279]: 2025-12-10 20:18:49.632 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:52 compute-0 nova_compute[189279]: 2025-12-10 20:18:52.171 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:54 compute-0 nova_compute[189279]: 2025-12-10 20:18:54.635 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:56 compute-0 podman[253256]: 2025-12-10 20:18:56.128391713 +0000 UTC m=+0.093679894 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 10 20:18:56 compute-0 podman[253255]: 2025-12-10 20:18:56.15125439 +0000 UTC m=+0.114425223 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 20:18:56 compute-0 podman[253257]: 2025-12-10 20:18:56.168877614 +0000 UTC m=+0.128866572 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0)
Dec 10 20:18:57 compute-0 nova_compute[189279]: 2025-12-10 20:18:57.174 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.125 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.125 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
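The acquire/acquired pair above is oslo.concurrency's lock wrapper serializing all build work for this instance UUID. The same pattern, outside Nova, looks like this (illustrative sketch, not Nova's code):

from oslo_concurrency import lockutils

INSTANCE_UUID = "cc1e9e66-56af-4162-a89f-c97758ee1a64"

# Everything inside the block is serialized against other holders of the same
# lock name, the way _locked_do_build_and_run_instance is guarded above.
with lockutils.lock(INSTANCE_UUID):
    pass  # build-and-run work would go here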
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.142 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.236 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.240 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.253 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.254 189283 INFO nova.compute.claims [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Claim successful on node compute-0.ctlplane.example.com
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.377 189283 DEBUG nova.compute.provider_tree [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.391 189283 DEBUG nova.scheduler.client.report [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.408 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
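For reference, the schedulable capacity implied by the inventory payload logged above follows Placement's usual formula, usable = (total - reserved) * allocation_ratio. A minimal sketch recomputing it with the values copied from the log (illustrative only):

# Placement treats usable capacity as (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {usable:g} schedulable units")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2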
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.409 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.450 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.451 189283 DEBUG nova.network.neutron [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.473 189283 INFO nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.490 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.587 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.589 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.589 189283 INFO nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Creating image(s)
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.590 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.590 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.591 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.605 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.656 189283 DEBUG nova.policy [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e773c65970c34c9db154c6fea65d9fa4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.667 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
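Both qemu-img probes above are run through oslo.concurrency's prlimit wrapper, which caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) before exec'ing qemu-img. A standalone sketch that re-runs the same probe outside Nova and reads the JSON result (paths copied from the log):

import json
import subprocess

BASE = "/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede"

# Same command line as logged: prlimit wrapper, then qemu-img info in JSON mode.
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", BASE, "--force-share", "--output=json",
]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
info = json.loads(out)
print(info["format"], info["virtual-size"])  # image format and virtual size in bytes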
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.668 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.669 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.680 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.739 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:18:58 compute-0 nova_compute[189279]: 2025-12-10 20:18:58.741 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede,backing_fmt=raw /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.126 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede,backing_fmt=raw /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk 1073741824" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.128 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "53f56b563801b5ea0f834b33920c5e6aa39aeede" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
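The create_qcow2_image step above produces a copy-on-write overlay whose backing file is the cached base image; the 1073741824 argument is the 1 GiB virtual size matching the flavor's 1 GB root disk. A standalone sketch of the same command (paths copied verbatim from the log; needs the same privileges as nova-compute):

import subprocess

BASE = "/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede"
DISK = "/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk"

# qcow2 overlay on a raw backing file, 1 GiB virtual size.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-o", f"backing_file={BASE},backing_fmt=raw",
     DISK, "1073741824"],
    check=True,
)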
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.128 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.207 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.209 189283 DEBUG nova.virt.disk.api [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Checking if we can resize image /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.211 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.289 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.292 189283 DEBUG nova.virt.disk.api [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Cannot resize image /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
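The "Cannot resize image ... to a smaller size" line is nova.virt.disk.api.can_resize_image declining to act because the requested size (1073741824 bytes) is not larger than the disk's current virtual size, so the overlay is left as created. A rough sketch of that growth-only check (not Nova's exact code):

import json
import subprocess

def can_grow(image_path, requested_bytes):
    # Compare the requested size with the virtual size qemu-img reports;
    # only strict growth is allowed, anything else is skipped.
    out = subprocess.run(
        ["qemu-img", "info", image_path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return requested_bytes > json.loads(out)["virtual-size"]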
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.294 189283 DEBUG nova.objects.instance [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lazy-loading 'migration_context' on Instance uuid cc1e9e66-56af-4162-a89f-c97758ee1a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.316 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.318 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Ensure instance console log exists: /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.319 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.320 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.320 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.430 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:18:59.434 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:18:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:18:59.435 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:18:59 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:18:59.441 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.519 189283 DEBUG nova.network.neutron [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Successfully created port: 191db221-f5ea-4b4e-aa90-70dca09235b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 10 20:18:59 compute-0 nova_compute[189279]: 2025-12-10 20:18:59.638 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:18:59 compute-0 podman[203484]: time="2025-12-10T20:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:18:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:18:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec 10 20:19:00 compute-0 podman[253326]: 2025-12-10 20:19:00.084045142 +0000 UTC m=+0.065410913 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:19:00 compute-0 podman[253325]: 2025-12-10 20:19:00.121282245 +0000 UTC m=+0.105375369 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.428 189283 DEBUG nova.network.neutron [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Successfully updated port: 191db221-f5ea-4b4e-aa90-70dca09235b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.451 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.452 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquired lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.452 189283 DEBUG nova.network.neutron [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.515 189283 DEBUG nova.compute.manager [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received event network-changed-191db221-f5ea-4b4e-aa90-70dca09235b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.516 189283 DEBUG nova.compute.manager [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Refreshing instance network info cache due to event network-changed-191db221-f5ea-4b4e-aa90-70dca09235b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.516 189283 DEBUG oslo_concurrency.lockutils [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:19:00 compute-0 nova_compute[189279]: 2025-12-10 20:19:00.579 189283 DEBUG nova.network.neutron [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.331 189283 DEBUG nova.network.neutron [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.356 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Releasing lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.357 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Instance network_info: |[{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
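The network_info blob cached above is plain JSON, so the fields the rest of the boot path consumes (port ID, MAC, fixed IP, MTU, tap device name) can be read out directly. An illustrative parse of the entry logged above, abridged to the fields used here:

import json

entry = json.loads("""
{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1",
 "address": "fa:16:3e:fb:91:03",
 "devname": "tap191db221-f5",
 "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387",
             "bridge": "br-int",
             "subnets": [{"cidr": "10.100.0.0/16",
                          "ips": [{"address": "10.100.1.212", "type": "fixed"}]}],
             "meta": {"mtu": 1442, "tunneled": true}}}
""")

fixed_ips = [ip["address"]
             for subnet in entry["network"]["subnets"]
             for ip in subnet["ips"]
             if ip["type"] == "fixed"]
print(entry["devname"], entry["address"], fixed_ips, entry["network"]["meta"]["mtu"])
# -> tap191db221-f5 fa:16:3e:fb:91:03 ['10.100.1.212'] 1442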
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.359 189283 DEBUG oslo_concurrency.lockutils [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquired lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.360 189283 DEBUG nova.network.neutron [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Refreshing network info cache for port 191db221-f5ea-4b4e-aa90-70dca09235b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.365 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Start _get_guest_xml network_info=[{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:14:12Z,direct_url=<?>,disk_format='qcow2',id=ab2dea70-7375-4e2d-beda-90f19a5ec15e,min_disk=0,min_ram=0,name='tempest-scenario-img--877921737',owner='e773c65970c34c9db154c6fea65d9fa4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:14:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'image_id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.379 189283 WARNING nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.388 189283 DEBUG nova.virt.libvirt.host [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.389 189283 DEBUG nova.virt.libvirt.host [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.399 189283 DEBUG nova.virt.libvirt.host [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.400 189283 DEBUG nova.virt.libvirt.host [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.401 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.401 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-10T20:11:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-10T20:14:12Z,direct_url=<?>,disk_format='qcow2',id=ab2dea70-7375-4e2d-beda-90f19a5ec15e,min_disk=0,min_ram=0,name='tempest-scenario-img--877921737',owner='e773c65970c34c9db154c6fea65d9fa4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-10T20:14:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.402 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.403 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.403 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.404 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.404 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.405 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.406 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.406 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.407 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.407 189283 DEBUG nova.virt.hardware [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
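The topology lines above show the selection for the m1.nano flavor: one vCPU and no flavor or image constraints (limits 0:0:0), so the only factorization within the 65536-per-dimension maxima is sockets=1, cores=1, threads=1. A rough approximation of that search (not Nova's implementation):

from collections import namedtuple

Topo = namedtuple("Topo", "sockets cores threads")

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Enumerate sockets*cores*threads factorizations of the vCPU count
    # that respect the per-dimension maxima.
    topos = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_cores) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                topos.append(Topo(sockets, cores, threads))
    return topos

print(possible_topologies(1))  # -> [Topo(sockets=1, cores=1, threads=1)], as in the log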
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.411 189283 DEBUG nova.virt.libvirt.vif [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq',id=16,image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e773c65970c34c9db154c6fea65d9fa4',ramdisk_id='',reservation_id='r-1svzys4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1355872434',owner_user_name='tempest-PrometheusGabbiTest-1355872434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='639468767e8f48a1bd0e3dac90a0ec47',uuid=cc1e9e66-56af-4162-a89f-c97758ee1a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.412 189283 DEBUG nova.network.os_vif_util [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converting VIF {"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.413 189283 DEBUG nova.network.os_vif_util [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:91:03,bridge_name='br-int',has_traffic_filtering=True,id=191db221-f5ea-4b4e-aa90-70dca09235b1,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap191db221-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.414 189283 DEBUG nova.objects.instance [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lazy-loading 'pci_devices' on Instance uuid cc1e9e66-56af-4162-a89f-c97758ee1a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
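[editor's note] The instance dump above carries a base64-encoded user_data payload. Decoding the value exactly as it appears in the log shows the script cloud-init will receive (a 300-second CPU-load loop, consistent with the PrometheusGabbiTest autoscaling scenario):

    import base64

    user_data = ("IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+"
                 "IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==")
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo 'Loading CPU'
    # set -v
    # cat /dev/urandom > /dev/null & sleep 300 ; kill $!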
Dec 10 20:19:01 compute-0 openstack_network_exporter[205632]: ERROR   20:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:19:01 compute-0 openstack_network_exporter[205632]: ERROR   20:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:19:01 compute-0 openstack_network_exporter[205632]: ERROR   20:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:19:01 compute-0 openstack_network_exporter[205632]: ERROR   20:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:19:01 compute-0 openstack_network_exporter[205632]: ERROR   20:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
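[editor's note] These exporter errors are it probing daemons that are not expected on a compute node (ovn-northd normally runs on the controllers) plus a DPDK-only PMD query on a kernel datapath. A quick hedged check for which control sockets actually exist, assuming the default /var/run/ovn and /var/run/openvswitch runtime directories:

    import glob

    # Assumed default runtime directories; adjust if the deployment relocates them.
    for pattern in ("/var/run/ovn/*.ctl", "/var/run/openvswitch/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")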
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.434 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] End _get_guest_xml xml=<domain type="kvm">
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <uuid>cc1e9e66-56af-4162-a89f-c97758ee1a64</uuid>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <name>instance-00000010</name>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <memory>131072</memory>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <vcpu>1</vcpu>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <metadata>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <nova:name>te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq</nova:name>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <nova:creationTime>2025-12-10 20:19:01</nova:creationTime>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <nova:flavor name="m1.nano">
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:memory>128</nova:memory>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:disk>1</nova:disk>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:swap>0</nova:swap>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:ephemeral>0</nova:ephemeral>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:vcpus>1</nova:vcpus>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       </nova:flavor>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <nova:owner>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:user uuid="639468767e8f48a1bd0e3dac90a0ec47">tempest-PrometheusGabbiTest-1355872434-project-member</nova:user>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:project uuid="e773c65970c34c9db154c6fea65d9fa4">tempest-PrometheusGabbiTest-1355872434</nova:project>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       </nova:owner>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <nova:root type="image" uuid="ab2dea70-7375-4e2d-beda-90f19a5ec15e"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <nova:ports>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         <nova:port uuid="191db221-f5ea-4b4e-aa90-70dca09235b1">
Dec 10 20:19:01 compute-0 nova_compute[189279]:           <nova:ip type="fixed" address="10.100.1.212" ipVersion="4"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:         </nova:port>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       </nova:ports>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </nova:instance>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   </metadata>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <sysinfo type="smbios">
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <system>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <entry name="manufacturer">RDO</entry>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <entry name="product">OpenStack Compute</entry>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <entry name="serial">cc1e9e66-56af-4162-a89f-c97758ee1a64</entry>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <entry name="uuid">cc1e9e66-56af-4162-a89f-c97758ee1a64</entry>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <entry name="family">Virtual Machine</entry>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </system>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   </sysinfo>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <os>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <boot dev="hd"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <smbios mode="sysinfo"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   </os>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <features>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <acpi/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <apic/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <vmcoreinfo/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   </features>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <clock offset="utc">
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <timer name="pit" tickpolicy="delay"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <timer name="hpet" present="no"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   </clock>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <cpu mode="host-model" match="exact">
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <topology sockets="1" cores="1" threads="1"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   </cpu>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   <devices>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <disk type="file" device="disk">
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <target dev="vda" bus="virtio"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <disk type="file" device="cdrom">
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <driver name="qemu" type="raw" cache="none"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <source file="/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.config"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <target dev="sda" bus="sata"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </disk>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <interface type="ethernet">
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <mac address="fa:16:3e:fb:91:03"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <driver name="vhost" rx_queue_size="512"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <mtu size="1442"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <target dev="tap191db221-f5"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </interface>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <serial type="pty">
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <log file="/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/console.log" append="off"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </serial>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <video>
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <model type="virtio"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </video>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <input type="tablet" bus="usb"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <rng model="virtio">
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <backend model="random">/dev/urandom</backend>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </rng>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="pci" model="pcie-root-port"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <controller type="usb" index="0"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     <memballoon model="virtio">
Dec 10 20:19:01 compute-0 nova_compute[189279]:       <stats period="10"/>
Dec 10 20:19:01 compute-0 nova_compute[189279]:     </memballoon>
Dec 10 20:19:01 compute-0 nova_compute[189279]:   </devices>
Dec 10 20:19:01 compute-0 nova_compute[189279]: </domain>
Dec 10 20:19:01 compute-0 nova_compute[189279]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
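[editor's note] The generated domain XML above can be sanity-checked offline with the standard library. A minimal sketch, assuming the XML has been copied out of the log into a local file named domain.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()   # hypothetical local copy of the XML above
    print(root.findtext("name"))              # instance-00000010
    print(root.findtext("memory"))            # 131072 (KiB, i.e. the flavor's 128 MiB)
    print(root.findtext("vcpu"))              # 1
    for iface in root.findall("./devices/interface"):
        print(iface.get("type"), iface.find("mac").get("address"))   # ethernet fa:16:3e:fb:91:03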
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.435 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Preparing to wait for external event network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.436 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.436 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.436 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.437 189283 DEBUG nova.virt.libvirt.vif [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-10T20:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq',id=16,image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e773c65970c34c9db154c6fea65d9fa4',ramdisk_id='',reservation_id='r-1svzys4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1355872434',owner_user_name='tempest-PrometheusGabbiTest-1355872434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-10T20:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='639468767e8f48a1bd0e3dac90a0ec47',uuid=cc1e9e66-56af-4162-a89f-c97758ee1a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.437 189283 DEBUG nova.network.os_vif_util [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converting VIF {"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.438 189283 DEBUG nova.network.os_vif_util [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:91:03,bridge_name='br-int',has_traffic_filtering=True,id=191db221-f5ea-4b4e-aa90-70dca09235b1,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap191db221-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.439 189283 DEBUG os_vif [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:91:03,bridge_name='br-int',has_traffic_filtering=True,id=191db221-f5ea-4b4e-aa90-70dca09235b1,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap191db221-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.439 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.440 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.440 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.444 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.445 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap191db221-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.446 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap191db221-f5, col_values=(('external_ids', {'iface-id': '191db221-f5ea-4b4e-aa90-70dca09235b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:91:03', 'vm-uuid': 'cc1e9e66-56af-4162-a89f-c97758ee1a64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.448 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:01 compute-0 NetworkManager[56238]: <info>  [1765397941.4502] manager: (tap191db221-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.451 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.457 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.458 189283 INFO os_vif [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:91:03,bridge_name='br-int',has_traffic_filtering=True,id=191db221-f5ea-4b4e-aa90-70dca09235b1,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap191db221-f5')
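[editor's note] The plug that just succeeded boils down to the two ovsdb transactions logged at 20:19:01.445-446: add tap191db221-f5 to br-int and set its external_ids so OVN can match the port binding. os-vif does this over the native OVSDB connection shown above; a rough, hypothetical equivalent expressed as a single ovs-vsctl call, driven from Python for illustration only:

    import subprocess

    port = "tap191db221-f5"
    iface_id = "191db221-f5ea-4b4e-aa90-70dca09235b1"
    mac = "fa:16:3e:fb:91:03"
    vm_uuid = "cc1e9e66-56af-4162-a89f-c97758ee1a64"

    # Roughly what AddPortCommand + DbSetCommand did, as one ovs-vsctl invocation.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True)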
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.504 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.505 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.506 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] No VIF found with MAC fa:16:3e:fb:91:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.506 189283 INFO nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Using config drive
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.810 189283 INFO nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Creating config drive at /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.config
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.818 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt0i47f5t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:19:01 compute-0 nova_compute[189279]: 2025-12-10 20:19:01.965 189283 DEBUG oslo_concurrency.processutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt0i47f5t" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
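[editor's note] The config drive is just an ISO9660/Joliet image built from a temporary staging directory with the mkisofs command shown above; the "-V config-2" volume label is what cloud-init's ConfigDrive datasource looks for. A minimal reproduction of the same call via oslo.concurrency (the helper nova itself used), with paths copied verbatim from the log and requiring the same privileges nova had:

    from oslo_concurrency import processutils

    # /tmp/tmpt0i47f5t was nova's temporary staging directory for the metadata files.
    out_path = "/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.config"
    processutils.execute(
        "/usr/bin/mkisofs", "-o", out_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpt0i47f5t")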
Dec 10 20:19:02 compute-0 kernel: tap191db221-f5: entered promiscuous mode
Dec 10 20:19:02 compute-0 NetworkManager[56238]: <info>  [1765397942.0656] manager: (tap191db221-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Dec 10 20:19:02 compute-0 ovn_controller[97701]: 2025-12-10T20:19:02Z|00216|binding|INFO|Claiming lport 191db221-f5ea-4b4e-aa90-70dca09235b1 for this chassis.
Dec 10 20:19:02 compute-0 ovn_controller[97701]: 2025-12-10T20:19:02Z|00217|binding|INFO|191db221-f5ea-4b4e-aa90-70dca09235b1: Claiming fa:16:3e:fb:91:03 10.100.1.212
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.072 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.081 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:91:03 10.100.1.212'], port_security=['fa:16:3e:fb:91:03 10.100.1.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.212/16', 'neutron:device_id': 'cc1e9e66-56af-4162-a89f-c97758ee1a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5861e367-6dd6-4128-97c5-6a0449548387', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e773c65970c34c9db154c6fea65d9fa4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '423352dd-9d4c-474d-a8f0-1199c6062876', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=742d4e89-613f-49d1-83dc-36d4a9402367, chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=191db221-f5ea-4b4e-aa90-70dca09235b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.082 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 191db221-f5ea-4b4e-aa90-70dca09235b1 in datapath 5861e367-6dd6-4128-97c5-6a0449548387 bound to our chassis
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.085 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5861e367-6dd6-4128-97c5-6a0449548387
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.097 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:02 compute-0 ovn_controller[97701]: 2025-12-10T20:19:02Z|00218|binding|INFO|Setting lport 191db221-f5ea-4b4e-aa90-70dca09235b1 up in Southbound
Dec 10 20:19:02 compute-0 ovn_controller[97701]: 2025-12-10T20:19:02Z|00219|binding|INFO|Setting lport 191db221-f5ea-4b4e-aa90-70dca09235b1 ovn-installed in OVS
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.112 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[26034be2-eabf-4dcb-aabc-2b47791f0475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:19:02 compute-0 systemd-machined[155642]: New machine qemu-17-instance-00000010.
Dec 10 20:19:02 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000010.
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.147 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[6186ace2-f025-4b2e-a829-3975a3dd4510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.153 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae62661-ca2d-4508-930a-68366c6d771f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:19:02 compute-0 systemd-udevd[253391]: Network interface NamePolicy= disabled on kernel command line.
Dec 10 20:19:02 compute-0 NetworkManager[56238]: <info>  [1765397942.1716] device (tap191db221-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 10 20:19:02 compute-0 NetworkManager[56238]: <info>  [1765397942.1723] device (tap191db221-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.177 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.195 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3c27c1-7207-45bd-bbe2-a1cf18e066bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.215 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a551b1b6-ebe0-4052-88fe-c31b8845b2c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5861e367-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:88:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499821, 'reachable_time': 34183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253407, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.236 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[f38b0d99-3c83-4d18-a987-2b91faa7d745]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5861e367-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499838, 'tstamp': 499838}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253413, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap5861e367-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499844, 'tstamp': 499844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253413, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.239 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5861e367-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.241 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.243 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5861e367-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.243 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.244 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5861e367-60, col_values=(('external_ids', {'iface-id': 'eedd7beb-1e55-4b8d-a932-7d0592d2e98a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:19:02 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:02.244 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
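[editor's note] The netlink dumps above show the metadata agent's tap interface inside the per-network namespace ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387 carrying 169.254.169.254/32 and 10.100.0.2/16. A hedged way to confirm that from the host, assuming ip(8) is available and using the namespace name from the log:

    import subprocess

    ns = "ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387"   # namespace name from the log
    # Expect 169.254.169.254/32 and 10.100.0.2/16 on tap5861e367-61.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "addr", "show"], check=True)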
Dec 10 20:19:02 compute-0 podman[253397]: 2025-12-10 20:19:02.302767509 +0000 UTC m=+0.111548885 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.621 189283 DEBUG nova.compute.manager [req-b00b926d-0862-4059-9859-c4acfab37645 req-d3d11378-af2e-47e5-b52a-9a33d3f12402 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received event network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.622 189283 DEBUG oslo_concurrency.lockutils [req-b00b926d-0862-4059-9859-c4acfab37645 req-d3d11378-af2e-47e5-b52a-9a33d3f12402 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.623 189283 DEBUG oslo_concurrency.lockutils [req-b00b926d-0862-4059-9859-c4acfab37645 req-d3d11378-af2e-47e5-b52a-9a33d3f12402 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.624 189283 DEBUG oslo_concurrency.lockutils [req-b00b926d-0862-4059-9859-c4acfab37645 req-d3d11378-af2e-47e5-b52a-9a33d3f12402 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.624 189283 DEBUG nova.compute.manager [req-b00b926d-0862-4059-9859-c4acfab37645 req-d3d11378-af2e-47e5-b52a-9a33d3f12402 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Processing event network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.850 189283 DEBUG nova.network.neutron [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updated VIF entry in instance network info cache for port 191db221-f5ea-4b4e-aa90-70dca09235b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.851 189283 DEBUG nova.network.neutron [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.866 189283 DEBUG oslo_concurrency.lockutils [req-9e509188-ef64-4ad9-b1e0-ae49735b814e req-07550e49-84bc-4a70-b8ef-e12d682df83a 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Releasing lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.984 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.985 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397942.985329, cc1e9e66-56af-4162-a89f-c97758ee1a64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.986 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] VM Started (Lifecycle Event)
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.989 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.995 189283 INFO nova.virt.libvirt.driver [-] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Instance spawned successfully.
Dec 10 20:19:02 compute-0 nova_compute[189279]: 2025-12-10 20:19:02.995 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.007 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.019 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.023 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.023 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.024 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.024 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.025 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.025 189283 DEBUG nova.virt.libvirt.driver [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.037 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.037 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397942.9854827, cc1e9e66-56af-4162-a89f-c97758ee1a64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.038 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] VM Paused (Lifecycle Event)
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.058 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.064 189283 DEBUG nova.virt.driver [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] Emitting event <LifecycleEvent: 1765397942.9895418, cc1e9e66-56af-4162-a89f-c97758ee1a64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.064 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] VM Resumed (Lifecycle Event)
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.080 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.087 189283 INFO nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Took 4.50 seconds to spawn the instance on the hypervisor.
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.088 189283 DEBUG nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.090 189283 DEBUG nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.106 189283 INFO nova.compute.manager [None req-d5fef9f0-8418-4d24-b476-f3d5b75fdf66 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.154 189283 INFO nova.compute.manager [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Took 4.96 seconds to build instance.
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.179 189283 DEBUG oslo_concurrency.lockutils [None req-67cd8102-8560-428a-8866-a9eb15523616 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:19:03 compute-0 nova_compute[189279]: 2025-12-10 20:19:03.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:04 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 10 20:19:04 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 10 20:19:06 compute-0 nova_compute[189279]: 2025-12-10 20:19:06.253 189283 DEBUG nova.compute.manager [req-6ec9036c-3f58-48b4-bcb5-42245be838a7 req-dbe71411-795c-434f-ad34-a070e571a398 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received event network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:19:06 compute-0 nova_compute[189279]: 2025-12-10 20:19:06.255 189283 DEBUG oslo_concurrency.lockutils [req-6ec9036c-3f58-48b4-bcb5-42245be838a7 req-dbe71411-795c-434f-ad34-a070e571a398 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:19:06 compute-0 nova_compute[189279]: 2025-12-10 20:19:06.256 189283 DEBUG oslo_concurrency.lockutils [req-6ec9036c-3f58-48b4-bcb5-42245be838a7 req-dbe71411-795c-434f-ad34-a070e571a398 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:19:06 compute-0 nova_compute[189279]: 2025-12-10 20:19:06.257 189283 DEBUG oslo_concurrency.lockutils [req-6ec9036c-3f58-48b4-bcb5-42245be838a7 req-dbe71411-795c-434f-ad34-a070e571a398 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:19:06 compute-0 nova_compute[189279]: 2025-12-10 20:19:06.257 189283 DEBUG nova.compute.manager [req-6ec9036c-3f58-48b4-bcb5-42245be838a7 req-dbe71411-795c-434f-ad34-a070e571a398 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] No waiting events found dispatching network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:19:06 compute-0 nova_compute[189279]: 2025-12-10 20:19:06.258 189283 WARNING nova.compute.manager [req-6ec9036c-3f58-48b4-bcb5-42245be838a7 req-dbe71411-795c-434f-ad34-a070e571a398 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received unexpected event network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 for instance with vm_state active and task_state None.
Dec 10 20:19:06 compute-0 nova_compute[189279]: 2025-12-10 20:19:06.449 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:07 compute-0 nova_compute[189279]: 2025-12-10 20:19:07.181 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:07 compute-0 nova_compute[189279]: 2025-12-10 20:19:07.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:07 compute-0 nova_compute[189279]: 2025-12-10 20:19:07.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:07 compute-0 nova_compute[189279]: 2025-12-10 20:19:07.486 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:19:08 compute-0 podman[253454]: 2025-12-10 20:19:08.135157764 +0000 UTC m=+0.113700524 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 20:19:09 compute-0 nova_compute[189279]: 2025-12-10 20:19:09.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:09 compute-0 nova_compute[189279]: 2025-12-10 20:19:09.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:19:09 compute-0 nova_compute[189279]: 2025-12-10 20:19:09.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:19:10 compute-0 nova_compute[189279]: 2025-12-10 20:19:10.188 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:19:10 compute-0 nova_compute[189279]: 2025-12-10 20:19:10.189 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:19:10 compute-0 nova_compute[189279]: 2025-12-10 20:19:10.189 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:19:10 compute-0 nova_compute[189279]: 2025-12-10 20:19:10.190 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.410 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.421 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.422 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.422 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.422 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.441 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.442 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.442 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.442 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.452 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.538 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.622 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.624 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.686 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.697 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.760 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.762 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:19:11 compute-0 nova_compute[189279]: 2025-12-10 20:19:11.825 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.127 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.128 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=72.26677703857422GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.128 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.128 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.187 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.222 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.223 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.223 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.223 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.275 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.288 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.311 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:19:12 compute-0 nova_compute[189279]: 2025-12-10 20:19:12.312 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:19:13 compute-0 nova_compute[189279]: 2025-12-10 20:19:13.377 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:15 compute-0 nova_compute[189279]: 2025-12-10 20:19:15.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:15 compute-0 nova_compute[189279]: 2025-12-10 20:19:15.507 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:19:16 compute-0 nova_compute[189279]: 2025-12-10 20:19:16.454 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:17 compute-0 nova_compute[189279]: 2025-12-10 20:19:17.185 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:18 compute-0 podman[253486]: 2025-12-10 20:19:18.140491899 +0000 UTC m=+0.118838852 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:19:18 compute-0 podman[253487]: 2025-12-10 20:19:18.144747263 +0000 UTC m=+0.115818680 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 10 20:19:21 compute-0 nova_compute[189279]: 2025-12-10 20:19:21.457 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:22 compute-0 nova_compute[189279]: 2025-12-10 20:19:22.187 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:23.403 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:19:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:23.403 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:19:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:19:23.404 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:19:26 compute-0 nova_compute[189279]: 2025-12-10 20:19:26.460 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:27 compute-0 podman[253529]: 2025-12-10 20:19:27.091713553 +0000 UTC m=+0.068045644 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 10 20:19:27 compute-0 podman[253530]: 2025-12-10 20:19:27.116701906 +0000 UTC m=+0.088662459 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:19:27 compute-0 podman[253531]: 2025-12-10 20:19:27.128644137 +0000 UTC m=+0.098378830 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, build-date=2024-09-18T21:23:30, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 10 20:19:27 compute-0 nova_compute[189279]: 2025-12-10 20:19:27.188 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:29 compute-0 podman[203484]: time="2025-12-10T20:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:19:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:19:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec 10 20:19:31 compute-0 podman[253582]: 2025-12-10 20:19:31.11751445 +0000 UTC m=+0.089864191 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:19:31 compute-0 podman[253581]: 2025-12-10 20:19:31.135188847 +0000 UTC m=+0.111297130 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec 10 20:19:31 compute-0 openstack_network_exporter[205632]: ERROR   20:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:19:31 compute-0 openstack_network_exporter[205632]: ERROR   20:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:19:31 compute-0 openstack_network_exporter[205632]: ERROR   20:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:19:31 compute-0 openstack_network_exporter[205632]: ERROR   20:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:19:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:19:31 compute-0 openstack_network_exporter[205632]: ERROR   20:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:19:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:19:31 compute-0 nova_compute[189279]: 2025-12-10 20:19:31.463 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:32 compute-0 ovn_controller[97701]: 2025-12-10T20:19:32Z|00220|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 10 20:19:32 compute-0 nova_compute[189279]: 2025-12-10 20:19:32.191 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:33 compute-0 podman[253626]: 2025-12-10 20:19:33.142404846 +0000 UTC m=+0.122080439 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Dec 10 20:19:36 compute-0 nova_compute[189279]: 2025-12-10 20:19:36.467 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:36 compute-0 ovn_controller[97701]: 2025-12-10T20:19:36Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:91:03 10.100.1.212
Dec 10 20:19:36 compute-0 ovn_controller[97701]: 2025-12-10T20:19:36Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:91:03 10.100.1.212
Dec 10 20:19:37 compute-0 nova_compute[189279]: 2025-12-10 20:19:37.195 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:39 compute-0 podman[253658]: 2025-12-10 20:19:39.116494287 +0000 UTC m=+0.090372595 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 20:19:41 compute-0 nova_compute[189279]: 2025-12-10 20:19:41.474 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:41 compute-0 sshd-session[253677]: Invalid user solv from 80.94.92.184 port 36756
Dec 10 20:19:41 compute-0 sshd-session[253677]: Connection closed by invalid user solv 80.94.92.184 port 36756 [preauth]
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.187 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.192 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
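The registration messages above show every pollster being handed to one shared concurrent.futures ThreadPoolExecutor, and the 20:19:42.187 message notes that there are more pollsters than worker threads, so they queue. A minimal sketch of that dispatch pattern, with hypothetical pollster callables standing in for the real ones:

# Minimal sketch of the dispatch pattern described above: each registered
# pollster is submitted to a shared ThreadPoolExecutor and runs when a worker
# thread becomes free. The pollster callables below are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def poll_disk_ephemeral_size():
    return ("disk.ephemeral.size", 0)          # stand-in for a real get_samples() call

def poll_memory_usage():
    return ("memory.usage", 42.8)              # stand-in for a real get_samples() call

pollsters = [poll_disk_ephemeral_size, poll_memory_usage]

# One worker, matching the "[1] threads" message: pollsters queue up, so the
# polling cycle takes longer than it would with more workers.
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(p) for p in pollsters]
    for future in futures:
        print(future.result())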
Dec 10 20:19:42 compute-0 nova_compute[189279]: 2025-12-10 20:19:42.199 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.209 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'name': 'te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.214 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cc1e9e66-56af-4162-a89f-c97758ee1a64 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 10 20:19:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:42.216 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cc1e9e66-56af-4162-a89f-c97758ee1a64 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a6b7809c80638f6e016296d2f243706fded356213cefb5a3f70c31b120afa2c9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.330 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Wed, 10 Dec 2025 20:19:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b170dbc3-2530-4029-8727-9516eca7ca1e x-openstack-request-id: req-b170dbc3-2530-4029-8727-9516eca7ca1e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.330 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cc1e9e66-56af-4162-a89f-c97758ee1a64", "name": "te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq", "status": "ACTIVE", "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "user_id": "639468767e8f48a1bd0e3dac90a0ec47", "metadata": {"metering.server_group": "bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda"}, "hostId": "1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131", "image": {"id": "ab2dea70-7375-4e2d-beda-90f19a5ec15e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ab2dea70-7375-4e2d-beda-90f19a5ec15e"}]}, "flavor": {"id": "e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4"}]}, "created": "2025-12-10T20:18:57Z", "updated": "2025-12-10T20:19:03Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.212", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fb:91:03"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cc1e9e66-56af-4162-a89f-c97758ee1a64"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cc1e9e66-56af-4162-a89f-c97758ee1a64"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-10T20:19:03.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000010", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.331 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cc1e9e66-56af-4162-a89f-c97758ee1a64 used request id req-b170dbc3-2530-4029-8727-9516eca7ca1e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.332 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cc1e9e66-56af-4162-a89f-c97758ee1a64', 'name': 'te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
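The REQ/RESP pair above is an ordinary authenticated GET against the Nova compute API, which ceilometer's discovery uses to fill in metadata the libvirt domain does not carry. A hedged sketch of the same kind of lookup with python-novaclient; the auth URL and credentials below are placeholders, and only the server UUID comes from the log:

# Sketch of the server lookup logged above, using python-novaclient.
# auth_url and the credentials are placeholders; the UUID is from the log.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",   # placeholder
                   username="ceilometer", password="secret",          # placeholders
                   project_name="service",
                   user_domain_name="Default", project_domain_name="Default")
sess = session.Session(auth=auth)
nova = client.Client("2.1", session=sess)

server = nova.servers.get("cc1e9e66-56af-4162-a89f-c97758ee1a64")
print(server.name, server.status)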
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.333 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.334 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:19:43.334051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:19:43.336830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.367 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.367 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.384 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.384 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
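The two disk.device.capacity samples per instance correspond to two block devices reported by libvirt (1073741824 bytes matches the flavor's 1 GB root disk; the 509952-byte device is plausibly the config drive). A sketch of reading the same per-device figures with the libvirt Python bindings; the connection URI, domain name and device names are placeholders:

# Sketch of reading per-device capacity and allocation with the libvirt
# Python bindings. URI, domain name and device names are placeholders.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000000e")   # placeholder domain name

for dev in ("vda", "vdb"):                     # placeholder device names
    capacity, allocation, physical = dom.blockInfo(dev)
    print(dev, "capacity:", capacity, "allocation:", allocation)

conn.close()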
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.385 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.386 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.386 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:19:43.385767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:19:43.386907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.391 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.394 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cc1e9e66-56af-4162-a89f-c97758ee1a64 / tap191db221-f5 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.394 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
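The network.incoming.* and network.outgoing.* samples come from libvirt's cumulative per-vNIC counters; the delta meters subtract the previous poll's reading, which is why the first poll of tap191db221-f5 logs "No delta meter predecessor". A sketch of reading those counters; the domain name is a placeholder, and the tap device name is taken from the log line above:

# Sketch of reading the cumulative vNIC counters behind the network.* meters.
# The domain name is a placeholder; the tap device name is from the log.
# Delta meters subtract the previous poll's reading, so the very first poll
# for an interface has no predecessor.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000010")   # placeholder domain name

(rx_bytes, rx_packets, rx_errs, rx_drop,
 tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap191db221-f5")
print("rx_packets:", rx_packets, "tx_packets:", tx_packets)

conn.close()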
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.395 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.396 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.396 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.396 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.397 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.397 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:19:43.395986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.397 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.398 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.398 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.399 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.399 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.399 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes volume: 1438 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.400 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.400 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.400 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.401 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.401 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.402 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:19:43.397718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:19:43.399155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:19:43.400544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:19:43.402289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.422 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/memory.usage volume: 42.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.449 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/memory.usage volume: 43.51953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
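The memory.usage figures (42.8 and 43.5 for the 128 MiB m1.nano flavor) are in MiB and are derived from the guest memory statistics exposed by libvirt. A sketch of dumping the raw counters; the domain name is a placeholder and ceilometer's exact formula is not reproduced here:

# Sketch of the raw guest memory counters behind a memory.usage sample.
# The domain name is a placeholder; ceilometer derives its MiB figure from
# counters like these (its exact formula is not reproduced here).
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000000e")   # placeholder domain name

for key, value in sorted(dom.memoryStats().items()):
    print(f"{key}: {value} KiB")               # most counters are reported in KiB

conn.close()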
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.450 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.450 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.450 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.450 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.450 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq>]
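The ERROR above is how the polling manager blacklists resources for a meter the libvirt inspector cannot serve: the pollster raises PollsterPermanentError with the failing resources and the manager stops offering them to that pollster on this source. An illustrative sketch of the raising side (not ceilometer's actual OutgoingBytesRatePollster code):

# Illustrative only; not ceilometer's actual OutgoingBytesRatePollster code.
# A pollster that cannot serve a set of resources raises
# PollsterPermanentError with those resources, and the manager then stops
# handing them to that pollster, which is what the ERROR above records.
from ceilometer.polling import plugin_base

def get_samples(manager, cache, resources):
    # the inspector provides no data for this meter on these resources,
    # so fail permanently instead of retrying on every polling cycle
    raise plugin_base.PollsterPermanentError(resources)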
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.451 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.451 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.451 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes volume: 1478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.451 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.452 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.453 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.453 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.453 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.455 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.455 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.455 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-10T20:19:43.450304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.457 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.457 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:19:43.451661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:19:43.452719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:19:43.454110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:19:43.455024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:19:43.456199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:19:43.457154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.501 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 29436928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.501 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.544 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.544 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.545 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.545 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.546 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/cpu volume: 310590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.546 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/cpu volume: 38340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 542055066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 53898242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 615475482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 54317872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.547 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:19:43.545935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.548 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.549 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 1055 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.549 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.549 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.549 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.550 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 29753344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.551 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.552 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.552 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 72704000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.552 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.553 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.553 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.553 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.553 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 3651583829 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 9167533590 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:19:43.547455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.555 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.556 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.556 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 297 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.556 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.556 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.556 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:19:43.548948) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.558 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.558 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.558 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq>]
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:19:43.550384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:19:43.551830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:19:43.553221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:19:43.554428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:19:43.555784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:19:43.557152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:19:43 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:19:43.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-10T20:19:43.558078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
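The ceilometer_agent_compute lines above trace one polling cycle per meter: the agent runs discovery for the pollster, checks whether it needs coordination, records a heartbeat, and turns the libvirt statistics into samples (_stats_to_sample). The network.incoming.bytes.rate ERROR is the permanent-failure path: when the inspector cannot provide data for a pollster, the pollster raises PollsterPermanentError and the manager stops polling those instances for that meter. A minimal sketch of that control flow, using simplified pollster and discovery callables rather than ceilometer's real classes:

# Simplified sketch of the polling flow logged above; the callables and the
# PollsterPermanentError shape are assumptions, not ceilometer's real API.
from collections import defaultdict


class PollsterPermanentError(Exception):
    """Carries the resources that should never be polled again for this meter."""

    def __init__(self, resources):
        super().__init__(resources)
        self.resources = resources


def run_pollster(name, get_samples, discover, blocked, heartbeat):
    # "Executing discovery process for pollsters [...]" (coordination check omitted)
    resources = [r for r in discover() if r not in blocked[name]]
    heartbeat(name)                       # "Polster heartbeat update: <meter>"
    try:
        for resource_id, volume in get_samples(resources):
            # "<instance-uuid>/<meter> volume: <n>  _stats_to_sample"
            print(f"{resource_id}/{name} volume: {volume}")
    except PollsterPermanentError as err:
        # "Prevent pollster <meter> from polling [...] anymore!"
        blocked[name].update(err.resources)


blocked = defaultdict(set)
instances = ["ca7daa1b", "cc1e9e66"]

# A meter the inspector serves (cf. the cpu samples above).
run_pollster("cpu", lambda res: [(r, 310590000000) for r in res],
             lambda: instances, blocked, lambda meter: None)


# A meter the inspector cannot serve (cf. network.incoming.bytes.rate above).
def unsupported(resources):
    raise PollsterPermanentError(resources)
    yield  # never reached; keeps this a generator like a real get_samples()


run_pollster("network.incoming.bytes.rate", unsupported,
             lambda: instances, blocked, lambda meter: None)
print(sorted(blocked["network.incoming.bytes.rate"]))  # these instances are now skipped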
Dec 10 20:19:46 compute-0 nova_compute[189279]: 2025-12-10 20:19:46.477 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:47 compute-0 nova_compute[189279]: 2025-12-10 20:19:47.202 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:49 compute-0 podman[253680]: 2025-12-10 20:19:49.138265275 +0000 UTC m=+0.111591887 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:19:49 compute-0 podman[253681]: 2025-12-10 20:19:49.140563277 +0000 UTC m=+0.103648793 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:19:51 compute-0 nova_compute[189279]: 2025-12-10 20:19:51.480 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:52 compute-0 nova_compute[189279]: 2025-12-10 20:19:52.208 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:56 compute-0 nova_compute[189279]: 2025-12-10 20:19:56.482 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:57 compute-0 nova_compute[189279]: 2025-12-10 20:19:57.210 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:19:58 compute-0 podman[253723]: 2025-12-10 20:19:58.104993327 +0000 UTC m=+0.079565044 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 20:19:58 compute-0 podman[253722]: 2025-12-10 20:19:58.126896717 +0000 UTC m=+0.102246215 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:19:58 compute-0 podman[253724]: 2025-12-10 20:19:58.131665576 +0000 UTC m=+0.098831604 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, vcs-type=git, container_name=kepler, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 10 20:19:59 compute-0 podman[203484]: time="2025-12-10T20:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:19:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:19:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
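The two podman API requests above (…/libpod/containers/json and …/libpod/containers/stats) are the queries the prometheus-podman-exporter issues against the libpod socket it mounts (CONTAINER_HOST unix:///run/podman/podman.sock in its config_data logged at 20:20:02). A rough sketch of such a query over the Unix socket; the connection subclass is only illustrative plumbing, not the exporter's implementation:

# Sketch of a libpod REST query over podman.sock, mirroring the GET line above.
# The socket path and API version are taken from the log; the HTTP-over-Unix
# plumbing is a generic pattern, not how the exporter is written.
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
resp = conn.getresponse()
containers = json.loads(resp.read())
print(f"{len(containers)} containers, HTTP {resp.status} {resp.reason}")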
Dec 10 20:20:01 compute-0 openstack_network_exporter[205632]: ERROR   20:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:20:01 compute-0 openstack_network_exporter[205632]: ERROR   20:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:20:01 compute-0 openstack_network_exporter[205632]: ERROR   20:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:20:01 compute-0 openstack_network_exporter[205632]: ERROR   20:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:20:01 compute-0 openstack_network_exporter[205632]: ERROR   20:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:20:01 compute-0 nova_compute[189279]: 2025-12-10 20:20:01.483 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:02 compute-0 podman[253774]: 2025-12-10 20:20:02.078110565 +0000 UTC m=+0.059910954 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:20:02 compute-0 podman[253773]: 2025-12-10 20:20:02.093969673 +0000 UTC m=+0.075915416 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:20:02 compute-0 nova_compute[189279]: 2025-12-10 20:20:02.212 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:03 compute-0 nova_compute[189279]: 2025-12-10 20:20:03.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:03 compute-0 nova_compute[189279]: 2025-12-10 20:20:03.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:04 compute-0 podman[253810]: 2025-12-10 20:20:04.164288062 +0000 UTC m=+0.138668895 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 10 20:20:06 compute-0 nova_compute[189279]: 2025-12-10 20:20:06.487 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:07 compute-0 nova_compute[189279]: 2025-12-10 20:20:07.214 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:08 compute-0 nova_compute[189279]: 2025-12-10 20:20:08.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:08 compute-0 nova_compute[189279]: 2025-12-10 20:20:08.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:20:09 compute-0 nova_compute[189279]: 2025-12-10 20:20:09.484 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:10 compute-0 podman[253839]: 2025-12-10 20:20:10.143961254 +0000 UTC m=+0.107656851 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.build-date=20251210)
Dec 10 20:20:10 compute-0 nova_compute[189279]: 2025-12-10 20:20:10.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:10 compute-0 nova_compute[189279]: 2025-12-10 20:20:10.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:20:10 compute-0 nova_compute[189279]: 2025-12-10 20:20:10.695 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:20:10 compute-0 nova_compute[189279]: 2025-12-10 20:20:10.696 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:20:10 compute-0 nova_compute[189279]: 2025-12-10 20:20:10.697 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:20:11 compute-0 nova_compute[189279]: 2025-12-10 20:20:11.491 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.219 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.221 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.242 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.243 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.243 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.244 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.267 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.267 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.268 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.268 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.341 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.410 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.411 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.476 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.485 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.562 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.564 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.650 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.983 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.984 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4957MB free_disk=72.23331451416016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.985 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:20:12 compute-0 nova_compute[189279]: 2025-12-10 20:20:12.986 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.123 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.123 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.124 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.124 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.194 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.212 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.213 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.230 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.255 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.320 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.338 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.339 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.340 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:20:13 compute-0 nova_compute[189279]: 2025-12-10 20:20:13.585 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:16 compute-0 nova_compute[189279]: 2025-12-10 20:20:16.494 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:17 compute-0 nova_compute[189279]: 2025-12-10 20:20:17.224 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:17 compute-0 nova_compute[189279]: 2025-12-10 20:20:17.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:17 compute-0 nova_compute[189279]: 2025-12-10 20:20:17.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:17 compute-0 nova_compute[189279]: 2025-12-10 20:20:17.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 20:20:20 compute-0 podman[253873]: 2025-12-10 20:20:20.074201966 +0000 UTC m=+0.055106945 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:20:20 compute-0 podman[253874]: 2025-12-10 20:20:20.139187867 +0000 UTC m=+0.106270683 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 10 20:20:21 compute-0 nova_compute[189279]: 2025-12-10 20:20:21.497 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:22 compute-0 nova_compute[189279]: 2025-12-10 20:20:22.229 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:20:23.405 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:20:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:20:23.405 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:20:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:20:23.406 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:20:24 compute-0 nova_compute[189279]: 2025-12-10 20:20:24.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:24 compute-0 nova_compute[189279]: 2025-12-10 20:20:24.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 20:20:24 compute-0 nova_compute[189279]: 2025-12-10 20:20:24.510 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 20:20:24 compute-0 nova_compute[189279]: 2025-12-10 20:20:24.511 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:20:26 compute-0 nova_compute[189279]: 2025-12-10 20:20:26.500 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:27 compute-0 nova_compute[189279]: 2025-12-10 20:20:27.228 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:29 compute-0 podman[253917]: 2025-12-10 20:20:29.149871852 +0000 UTC m=+0.116825198 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, container_name=kepler, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 10 20:20:29 compute-0 podman[253915]: 2025-12-10 20:20:29.166279464 +0000 UTC m=+0.134515354 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 20:20:29 compute-0 podman[253916]: 2025-12-10 20:20:29.184197607 +0000 UTC m=+0.147950467 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 10 20:20:29 compute-0 podman[203484]: time="2025-12-10T20:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:20:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:20:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec 10 20:20:31 compute-0 openstack_network_exporter[205632]: ERROR   20:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:20:31 compute-0 openstack_network_exporter[205632]: ERROR   20:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:20:31 compute-0 openstack_network_exporter[205632]: ERROR   20:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:20:31 compute-0 openstack_network_exporter[205632]: ERROR   20:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:20:31 compute-0 openstack_network_exporter[205632]: ERROR   20:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:20:31 compute-0 nova_compute[189279]: 2025-12-10 20:20:31.503 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:32 compute-0 nova_compute[189279]: 2025-12-10 20:20:32.230 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:33 compute-0 podman[253967]: 2025-12-10 20:20:33.087085678 +0000 UTC m=+0.070234463 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 10 20:20:33 compute-0 podman[253968]: 2025-12-10 20:20:33.092746051 +0000 UTC m=+0.066669857 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:20:35 compute-0 podman[254008]: 2025-12-10 20:20:35.117299101 +0000 UTC m=+0.094936097 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 10 20:20:36 compute-0 nova_compute[189279]: 2025-12-10 20:20:36.505 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:37 compute-0 nova_compute[189279]: 2025-12-10 20:20:37.233 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:41 compute-0 podman[254042]: 2025-12-10 20:20:41.195281139 +0000 UTC m=+0.156030014 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:20:41 compute-0 nova_compute[189279]: 2025-12-10 20:20:41.508 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:42 compute-0 nova_compute[189279]: 2025-12-10 20:20:42.237 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:46 compute-0 nova_compute[189279]: 2025-12-10 20:20:46.511 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:47 compute-0 nova_compute[189279]: 2025-12-10 20:20:47.239 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:51 compute-0 podman[254061]: 2025-12-10 20:20:51.112151835 +0000 UTC m=+0.096923332 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:20:51 compute-0 podman[254062]: 2025-12-10 20:20:51.126280405 +0000 UTC m=+0.102071081 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 10 20:20:51 compute-0 nova_compute[189279]: 2025-12-10 20:20:51.515 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:52 compute-0 nova_compute[189279]: 2025-12-10 20:20:52.242 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:56 compute-0 nova_compute[189279]: 2025-12-10 20:20:56.520 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:57 compute-0 nova_compute[189279]: 2025-12-10 20:20:57.247 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:20:59 compute-0 podman[203484]: time="2025-12-10T20:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:20:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:20:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec 10 20:21:00 compute-0 podman[254103]: 2025-12-10 20:21:00.083897959 +0000 UTC m=+0.063209364 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 10 20:21:00 compute-0 podman[254104]: 2025-12-10 20:21:00.11512278 +0000 UTC m=+0.088859745 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 10 20:21:00 compute-0 podman[254107]: 2025-12-10 20:21:00.137835112 +0000 UTC m=+0.105642817 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Dec 10 20:21:01 compute-0 openstack_network_exporter[205632]: ERROR   20:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:21:01 compute-0 openstack_network_exporter[205632]: ERROR   20:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:21:01 compute-0 openstack_network_exporter[205632]: ERROR   20:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:21:01 compute-0 openstack_network_exporter[205632]: ERROR   20:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:21:01 compute-0 openstack_network_exporter[205632]: ERROR   20:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:21:01 compute-0 nova_compute[189279]: 2025-12-10 20:21:01.526 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:02 compute-0 nova_compute[189279]: 2025-12-10 20:21:02.250 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:03 compute-0 nova_compute[189279]: 2025-12-10 20:21:03.523 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:04 compute-0 podman[254160]: 2025-12-10 20:21:04.105209621 +0000 UTC m=+0.081583869 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 20:21:04 compute-0 podman[254161]: 2025-12-10 20:21:04.12818966 +0000 UTC m=+0.093536130 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:21:04 compute-0 nova_compute[189279]: 2025-12-10 20:21:04.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:04 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 10 20:21:06 compute-0 podman[254202]: 2025-12-10 20:21:06.17947027 +0000 UTC m=+0.149118858 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 10 20:21:06 compute-0 nova_compute[189279]: 2025-12-10 20:21:06.528 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:07 compute-0 nova_compute[189279]: 2025-12-10 20:21:07.256 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:08 compute-0 nova_compute[189279]: 2025-12-10 20:21:08.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:08 compute-0 nova_compute[189279]: 2025-12-10 20:21:08.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:21:10 compute-0 nova_compute[189279]: 2025-12-10 20:21:10.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:10 compute-0 nova_compute[189279]: 2025-12-10 20:21:10.491 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:21:10 compute-0 nova_compute[189279]: 2025-12-10 20:21:10.491 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:21:11 compute-0 nova_compute[189279]: 2025-12-10 20:21:11.210 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:21:11 compute-0 nova_compute[189279]: 2025-12-10 20:21:11.211 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:21:11 compute-0 nova_compute[189279]: 2025-12-10 20:21:11.211 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:21:11 compute-0 nova_compute[189279]: 2025-12-10 20:21:11.211 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:21:11 compute-0 nova_compute[189279]: 2025-12-10 20:21:11.531 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:12 compute-0 podman[254228]: 2025-12-10 20:21:12.120791307 +0000 UTC m=+0.095531945 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.261 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.605 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.627 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.627 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.628 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.628 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.655 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.656 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.656 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.656 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.747 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.850 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.851 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.933 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:21:12 compute-0 nova_compute[189279]: 2025-12-10 20:21:12.953 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.012 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.014 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.073 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.387 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.389 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4942MB free_disk=72.23237991333008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.389 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.390 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.502 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.502 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.503 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.503 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.674 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.692 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.694 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:21:13 compute-0 nova_compute[189279]: 2025-12-10 20:21:13.694 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:21:14 compute-0 nova_compute[189279]: 2025-12-10 20:21:14.687 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:14 compute-0 nova_compute[189279]: 2025-12-10 20:21:14.688 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:16 compute-0 nova_compute[189279]: 2025-12-10 20:21:16.534 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:17 compute-0 nova_compute[189279]: 2025-12-10 20:21:17.265 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:17 compute-0 nova_compute[189279]: 2025-12-10 20:21:17.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:19 compute-0 nova_compute[189279]: 2025-12-10 20:21:19.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:21:21 compute-0 nova_compute[189279]: 2025-12-10 20:21:21.538 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:22 compute-0 podman[254259]: 2025-12-10 20:21:22.094875563 +0000 UTC m=+0.077813287 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:21:22 compute-0 podman[254260]: 2025-12-10 20:21:22.133771791 +0000 UTC m=+0.107344213 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public)
Dec 10 20:21:22 compute-0 nova_compute[189279]: 2025-12-10 20:21:22.270 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:21:23.407 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:21:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:21:23.407 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:21:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:21:23.407 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:21:26 compute-0 nova_compute[189279]: 2025-12-10 20:21:26.541 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:27 compute-0 nova_compute[189279]: 2025-12-10 20:21:27.273 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:29 compute-0 podman[203484]: time="2025-12-10T20:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:21:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:21:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4817 "" "Go-http-client/1.1"
Dec 10 20:21:31 compute-0 podman[254309]: 2025-12-10 20:21:31.118146405 +0000 UTC m=+0.077357094 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public)
Dec 10 20:21:31 compute-0 podman[254303]: 2025-12-10 20:21:31.122164724 +0000 UTC m=+0.090635413 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 20:21:31 compute-0 podman[254302]: 2025-12-10 20:21:31.140129848 +0000 UTC m=+0.114106066 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:21:31 compute-0 openstack_network_exporter[205632]: ERROR   20:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:21:31 compute-0 openstack_network_exporter[205632]: ERROR   20:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:21:31 compute-0 openstack_network_exporter[205632]: ERROR   20:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:21:31 compute-0 openstack_network_exporter[205632]: ERROR   20:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:21:31 compute-0 openstack_network_exporter[205632]: ERROR   20:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:21:31 compute-0 nova_compute[189279]: 2025-12-10 20:21:31.544 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:32 compute-0 nova_compute[189279]: 2025-12-10 20:21:32.276 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:35 compute-0 podman[254357]: 2025-12-10 20:21:35.105802461 +0000 UTC m=+0.068548948 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:21:35 compute-0 podman[254356]: 2025-12-10 20:21:35.143008513 +0000 UTC m=+0.112223605 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 10 20:21:36 compute-0 nova_compute[189279]: 2025-12-10 20:21:36.546 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:37 compute-0 podman[254397]: 2025-12-10 20:21:37.197053438 +0000 UTC m=+0.175151880 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 10 20:21:37 compute-0 nova_compute[189279]: 2025-12-10 20:21:37.278 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:41 compute-0 nova_compute[189279]: 2025-12-10 20:21:41.549 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.185 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.186 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.186 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.198 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa519f8c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.199 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'name': 'te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.205 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cc1e9e66-56af-4162-a89f-c97758ee1a64', 'name': 'te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.208 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:21:42.208506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.211 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.211 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.212 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:21:42.212817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.239 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.240 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.261 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.262 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.264 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:21:42.266002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.268 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:21:42.270172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.278 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 nova_compute[189279]: 2025-12-10 20:21:42.282 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.285 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.288 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.289 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.290 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:21:42.289481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.291 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.293 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.294 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.295 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:21:42.294525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.296 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.299 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:21:42.299462) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.300 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.301 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.304 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:21:42.304628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.305 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.306 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes.delta volume: 182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.308 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:21:42.309251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.351 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/memory.usage volume: 42.52734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.391 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/memory.usage volume: 43.65234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.392 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.392 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.393 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.395 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:21:42.394736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.395 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.396 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.398 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.398 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.398 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:21:42.398449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.399 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.400 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.400 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.402 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.403 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:21:42.403292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.404 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.407 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:21:42.407045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.408 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.409 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.411 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:21:42.410761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.411 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
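Note: every meter in this polling task repeats the same sequence seen above (discovery, coordination check, heartbeat update, one sample per instance/device, finish). A minimal sketch of that per-pollster flow, simplified and not the actual ceilometer.polling.manager code; helper names such as agent.discover, source.coordination_group_name and agent.update_heartbeat are illustrative only:

# Simplified sketch of the per-pollster sequence visible in the log above;
# not the real ceilometer API, only the order of operations it logs.
def run_pollster(agent, pollster, source):
    # "Executing discovery process ... discovery method [local_instances]"
    resources = agent.discover(["local_instances"])
    # "Checking if we need coordination ... coordination group name [None]"
    if source.coordination_group_name is None:
        pass  # no hashring filtering needed for this source
    # "Polster heartbeat update: <meter>"
    agent.update_heartbeat(pollster.name)
    # "<instance-uuid>/<meter> volume: <value>"
    samples = list(pollster.get_samples(agent, {}, resources))
    # "Finished polling pollster <meter> in the context of pollsters"
    return samples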
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.414 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.414 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:21:42.414402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.477 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 30525952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.478 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.527 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.527 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.528 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.528 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.528 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/cpu volume: 334080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.529 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/cpu volume: 156930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
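Note: the cpu meter is cumulative guest CPU time in nanoseconds, so the two samples above convert to seconds as plain arithmetic:

# Cumulative CPU time reported above, converted from nanoseconds to seconds.
for uuid, ns in [("ca7daa1b-94a2-4e08-902b-73be0ab83974", 334080000000),
                 ("cc1e9e66-56af-4162-a89f-c97758ee1a64", 156930000000)]:
    print(uuid, ns / 1e9, "s")   # 334.08 s and 156.93 s of CPU time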
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:21:42.528790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.530 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:21:42.530355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.530 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 563933312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.531 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 61232129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.531 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 615475482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.531 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 54317872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.531 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:21:42.532133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.532 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 1099 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.533 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.533 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.533 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.533 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.534 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.534 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.534 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.534 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.534 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.535 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.536 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.536 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.536 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:21:42.534139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:21:42.535597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.537 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.537 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.537 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
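Note: the power.state value of 1 reported for both instances corresponds to "running". For reference, the integer codes used by nova.compute.power_state are:

# Power-state codes as defined in nova.compute.power_state;
# both instances above report 1, i.e. RUNNING.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATES[1])  # RUNNING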
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.538 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:21:42.537521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.539 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:21:42.539110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.539 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 3722115177 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.539 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.539 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 9196700407 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.540 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.541 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.541 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.541 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:21:42.540782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.543 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:21:42.542927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.544 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:21:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:21:43 compute-0 podman[254424]: 2025-12-10 20:21:43.106196587 +0000 UTC m=+0.089539933 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Dec 10 20:21:46 compute-0 nova_compute[189279]: 2025-12-10 20:21:46.552 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:47 compute-0 nova_compute[189279]: 2025-12-10 20:21:47.289 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:51 compute-0 nova_compute[189279]: 2025-12-10 20:21:51.555 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:52 compute-0 nova_compute[189279]: 2025-12-10 20:21:52.295 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:53 compute-0 podman[254444]: 2025-12-10 20:21:53.107712843 +0000 UTC m=+0.087173860 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:21:53 compute-0 podman[254445]: 2025-12-10 20:21:53.152216962 +0000 UTC m=+0.122755589 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350)
Dec 10 20:21:56 compute-0 nova_compute[189279]: 2025-12-10 20:21:56.557 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:57 compute-0 nova_compute[189279]: 2025-12-10 20:21:57.299 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:21:59 compute-0 podman[203484]: time="2025-12-10T20:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:21:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:21:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
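Note: the two requests above are the podman exporter scraping the libpod REST API over the podman socket. A self-contained sketch of the same containers/json call from Python; the socket path /run/podman/podman.sock is an assumption (it matches the CONTAINER_HOST setting shown in the podman_exporter config further down in this log):

# Query the same libpod endpoint as the log lines above over the podman
# unix socket (socket path assumed, see note).
import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])

The periodic "container health_status ... healthy" events elsewhere in this log come from podman's scheduled health checks; the same check can be run by hand with podman healthcheck run <container>.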
Dec 10 20:22:01 compute-0 openstack_network_exporter[205632]: ERROR   20:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:22:01 compute-0 openstack_network_exporter[205632]: ERROR   20:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:22:01 compute-0 openstack_network_exporter[205632]: ERROR   20:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:22:01 compute-0 openstack_network_exporter[205632]: ERROR   20:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:22:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:22:01 compute-0 openstack_network_exporter[205632]: ERROR   20:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:22:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:22:01 compute-0 nova_compute[189279]: 2025-12-10 20:22:01.560 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:02 compute-0 podman[254490]: 2025-12-10 20:22:02.141321413 +0000 UTC m=+0.096547562 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release=1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64)
Dec 10 20:22:02 compute-0 podman[254488]: 2025-12-10 20:22:02.149074503 +0000 UTC m=+0.111854885 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 20:22:02 compute-0 podman[254489]: 2025-12-10 20:22:02.179646565 +0000 UTC m=+0.135812179 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:22:02 compute-0 nova_compute[189279]: 2025-12-10 20:22:02.301 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:04 compute-0 nova_compute[189279]: 2025-12-10 20:22:04.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:04 compute-0 nova_compute[189279]: 2025-12-10 20:22:04.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:06 compute-0 podman[254545]: 2025-12-10 20:22:06.074141082 +0000 UTC m=+0.051680864 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:22:06 compute-0 podman[254544]: 2025-12-10 20:22:06.08153129 +0000 UTC m=+0.062716430 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 10 20:22:06 compute-0 nova_compute[189279]: 2025-12-10 20:22:06.562 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:07 compute-0 nova_compute[189279]: 2025-12-10 20:22:07.304 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:08 compute-0 podman[254588]: 2025-12-10 20:22:08.176481557 +0000 UTC m=+0.138114861 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 10 20:22:10 compute-0 nova_compute[189279]: 2025-12-10 20:22:10.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:10 compute-0 nova_compute[189279]: 2025-12-10 20:22:10.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:22:11 compute-0 nova_compute[189279]: 2025-12-10 20:22:11.485 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:11 compute-0 nova_compute[189279]: 2025-12-10 20:22:11.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:11 compute-0 nova_compute[189279]: 2025-12-10 20:22:11.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:22:11 compute-0 nova_compute[189279]: 2025-12-10 20:22:11.566 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:12 compute-0 nova_compute[189279]: 2025-12-10 20:22:12.237 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:22:12 compute-0 nova_compute[189279]: 2025-12-10 20:22:12.238 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:22:12 compute-0 nova_compute[189279]: 2025-12-10 20:22:12.239 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:22:12 compute-0 nova_compute[189279]: 2025-12-10 20:22:12.306 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.674 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.698 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.699 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.701 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.702 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.737 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.737 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.738 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.738 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.818 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.881 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.882 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.976 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:22:13 compute-0 nova_compute[189279]: 2025-12-10 20:22:13.989 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.051 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.052 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:22:14 compute-0 podman[254622]: 2025-12-10 20:22:14.093618002 +0000 UTC m=+0.078258279 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.116 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.435 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.436 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=72.2323989868164GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.436 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.437 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.512 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.513 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.513 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.513 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.570 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.592 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.594 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:22:14 compute-0 nova_compute[189279]: 2025-12-10 20:22:14.594 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:22:16 compute-0 nova_compute[189279]: 2025-12-10 20:22:16.568 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:17 compute-0 nova_compute[189279]: 2025-12-10 20:22:17.309 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:17 compute-0 nova_compute[189279]: 2025-12-10 20:22:17.380 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:19 compute-0 nova_compute[189279]: 2025-12-10 20:22:19.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:22:21 compute-0 nova_compute[189279]: 2025-12-10 20:22:21.571 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:22 compute-0 nova_compute[189279]: 2025-12-10 20:22:22.312 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:22:23.408 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:22:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:22:23.409 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:22:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:22:23.410 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:22:24 compute-0 podman[254648]: 2025-12-10 20:22:24.124437987 +0000 UTC m=+0.094521808 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:22:24 compute-0 podman[254647]: 2025-12-10 20:22:24.149819991 +0000 UTC m=+0.126317534 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:22:26 compute-0 nova_compute[189279]: 2025-12-10 20:22:26.574 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:27 compute-0 nova_compute[189279]: 2025-12-10 20:22:27.317 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:29 compute-0 podman[203484]: time="2025-12-10T20:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:22:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:22:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec 10 20:22:31 compute-0 openstack_network_exporter[205632]: ERROR   20:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:22:31 compute-0 openstack_network_exporter[205632]: ERROR   20:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:22:31 compute-0 openstack_network_exporter[205632]: ERROR   20:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:22:31 compute-0 openstack_network_exporter[205632]: ERROR   20:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:22:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:22:31 compute-0 openstack_network_exporter[205632]: ERROR   20:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:22:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:22:31 compute-0 nova_compute[189279]: 2025-12-10 20:22:31.577 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:32 compute-0 nova_compute[189279]: 2025-12-10 20:22:32.321 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:33 compute-0 podman[254689]: 2025-12-10 20:22:33.130737232 +0000 UTC m=+0.092121003 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:22:33 compute-0 podman[254688]: 2025-12-10 20:22:33.142314673 +0000 UTC m=+0.113457667 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 10 20:22:33 compute-0 podman[254695]: 2025-12-10 20:22:33.165823427 +0000 UTC m=+0.115358808 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 10 20:22:36 compute-0 nova_compute[189279]: 2025-12-10 20:22:36.580 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:37 compute-0 podman[254745]: 2025-12-10 20:22:37.12740277 +0000 UTC m=+0.089998046 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:22:37 compute-0 sshd-session[254742]: Invalid user solv from 80.94.92.184 port 39218
Dec 10 20:22:37 compute-0 podman[254744]: 2025-12-10 20:22:37.198340801 +0000 UTC m=+0.163194137 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 10 20:22:37 compute-0 nova_compute[189279]: 2025-12-10 20:22:37.323 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:37 compute-0 sshd-session[254742]: Connection closed by invalid user solv 80.94.92.184 port 39218 [preauth]
Dec 10 20:22:39 compute-0 podman[254784]: 2025-12-10 20:22:39.171938799 +0000 UTC m=+0.133707494 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:22:41 compute-0 nova_compute[189279]: 2025-12-10 20:22:41.583 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:42 compute-0 nova_compute[189279]: 2025-12-10 20:22:42.326 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:44 compute-0 podman[254810]: 2025-12-10 20:22:44.78901147 +0000 UTC m=+0.103866310 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 10 20:22:46 compute-0 nova_compute[189279]: 2025-12-10 20:22:46.586 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:47 compute-0 nova_compute[189279]: 2025-12-10 20:22:47.328 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:51 compute-0 nova_compute[189279]: 2025-12-10 20:22:51.589 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:52 compute-0 nova_compute[189279]: 2025-12-10 20:22:52.331 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:55 compute-0 podman[254831]: 2025-12-10 20:22:55.098347408 +0000 UTC m=+0.066373870 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:22:55 compute-0 podman[254832]: 2025-12-10 20:22:55.124909024 +0000 UTC m=+0.088291231 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Dec 10 20:22:56 compute-0 nova_compute[189279]: 2025-12-10 20:22:56.591 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:57 compute-0 nova_compute[189279]: 2025-12-10 20:22:57.334 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:22:59 compute-0 podman[203484]: time="2025-12-10T20:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:22:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:22:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Dec 10 20:23:01 compute-0 openstack_network_exporter[205632]: ERROR   20:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:23:01 compute-0 openstack_network_exporter[205632]: ERROR   20:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:23:01 compute-0 openstack_network_exporter[205632]: ERROR   20:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:23:01 compute-0 openstack_network_exporter[205632]: ERROR   20:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:23:01 compute-0 openstack_network_exporter[205632]: ERROR   20:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:23:01 compute-0 nova_compute[189279]: 2025-12-10 20:23:01.594 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:02 compute-0 nova_compute[189279]: 2025-12-10 20:23:02.337 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:04 compute-0 podman[254876]: 2025-12-10 20:23:04.469728818 +0000 UTC m=+0.097977980 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release-0.7.12=, maintainer=Red Hat, Inc.)
Dec 10 20:23:04 compute-0 podman[254874]: 2025-12-10 20:23:04.476646265 +0000 UTC m=+0.112367949 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 10 20:23:04 compute-0 podman[254875]: 2025-12-10 20:23:04.490716294 +0000 UTC m=+0.114072915 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 10 20:23:06 compute-0 nova_compute[189279]: 2025-12-10 20:23:06.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:06 compute-0 nova_compute[189279]: 2025-12-10 20:23:06.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:06 compute-0 nova_compute[189279]: 2025-12-10 20:23:06.597 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:07 compute-0 nova_compute[189279]: 2025-12-10 20:23:07.339 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:08 compute-0 podman[254930]: 2025-12-10 20:23:08.102043881 +0000 UTC m=+0.083827380 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:23:08 compute-0 podman[254931]: 2025-12-10 20:23:08.118223307 +0000 UTC m=+0.085773273 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:23:10 compute-0 podman[254971]: 2025-12-10 20:23:10.182371824 +0000 UTC m=+0.152730826 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 20:23:10 compute-0 nova_compute[189279]: 2025-12-10 20:23:10.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:10 compute-0 nova_compute[189279]: 2025-12-10 20:23:10.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:23:11 compute-0 nova_compute[189279]: 2025-12-10 20:23:11.600 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:12 compute-0 nova_compute[189279]: 2025-12-10 20:23:12.341 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:12 compute-0 nova_compute[189279]: 2025-12-10 20:23:12.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:13 compute-0 nova_compute[189279]: 2025-12-10 20:23:13.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:13 compute-0 nova_compute[189279]: 2025-12-10 20:23:13.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:23:13 compute-0 nova_compute[189279]: 2025-12-10 20:23:13.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:23:14 compute-0 nova_compute[189279]: 2025-12-10 20:23:14.241 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:23:14 compute-0 nova_compute[189279]: 2025-12-10 20:23:14.242 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:23:14 compute-0 nova_compute[189279]: 2025-12-10 20:23:14.243 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:23:14 compute-0 nova_compute[189279]: 2025-12-10 20:23:14.243 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:23:15 compute-0 podman[254994]: 2025-12-10 20:23:15.104335879 +0000 UTC m=+0.081860876 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.726 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.744 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.745 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.747 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.748 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.773 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.774 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.775 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.776 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.847 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.909 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.911 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.972 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:23:15 compute-0 nova_compute[189279]: 2025-12-10 20:23:15.981 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.057 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.059 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.119 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.426 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.427 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4876MB free_disk=72.23237991333008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.428 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.428 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.503 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.503 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.504 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.504 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.559 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.571 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.573 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.574 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:23:16 compute-0 nova_compute[189279]: 2025-12-10 20:23:16.603 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:17 compute-0 nova_compute[189279]: 2025-12-10 20:23:17.314 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:17 compute-0 nova_compute[189279]: 2025-12-10 20:23:17.344 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:19 compute-0 nova_compute[189279]: 2025-12-10 20:23:19.484 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:20 compute-0 nova_compute[189279]: 2025-12-10 20:23:20.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:23:21 compute-0 nova_compute[189279]: 2025-12-10 20:23:21.604 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:22 compute-0 nova_compute[189279]: 2025-12-10 20:23:22.345 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:23:23.410 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:23:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:23:23.410 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:23:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:23:23.411 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:23:26 compute-0 podman[255026]: 2025-12-10 20:23:26.149812938 +0000 UTC m=+0.128015650 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:23:26 compute-0 podman[255027]: 2025-12-10 20:23:26.157143785 +0000 UTC m=+0.129782947 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Dec 10 20:23:26 compute-0 nova_compute[189279]: 2025-12-10 20:23:26.607 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:27 compute-0 nova_compute[189279]: 2025-12-10 20:23:27.348 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:29 compute-0 podman[203484]: time="2025-12-10T20:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:23:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:23:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec 10 20:23:31 compute-0 openstack_network_exporter[205632]: ERROR   20:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:23:31 compute-0 openstack_network_exporter[205632]: ERROR   20:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:23:31 compute-0 openstack_network_exporter[205632]: ERROR   20:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:23:31 compute-0 openstack_network_exporter[205632]: ERROR   20:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:23:31 compute-0 openstack_network_exporter[205632]: ERROR   20:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:23:31 compute-0 nova_compute[189279]: 2025-12-10 20:23:31.610 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:32 compute-0 nova_compute[189279]: 2025-12-10 20:23:32.352 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:35 compute-0 podman[255071]: 2025-12-10 20:23:35.142120726 +0000 UTC m=+0.092124672 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, container_name=kepler, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Dec 10 20:23:35 compute-0 podman[255070]: 2025-12-10 20:23:35.142228819 +0000 UTC m=+0.106226032 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:23:35 compute-0 podman[255069]: 2025-12-10 20:23:35.146532755 +0000 UTC m=+0.109002807 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 10 20:23:36 compute-0 nova_compute[189279]: 2025-12-10 20:23:36.613 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:37 compute-0 nova_compute[189279]: 2025-12-10 20:23:37.355 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:39 compute-0 podman[255123]: 2025-12-10 20:23:39.13601469 +0000 UTC m=+0.108842064 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 10 20:23:39 compute-0 podman[255124]: 2025-12-10 20:23:39.151675191 +0000 UTC m=+0.110309252 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:23:41 compute-0 podman[255164]: 2025-12-10 20:23:41.13790634 +0000 UTC m=+0.120687053 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:23:41 compute-0 nova_compute[189279]: 2025-12-10 20:23:41.617 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.186 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.188 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
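The burst of "Registering pollster ... to be executed via executor ..." lines above corresponds to each stevedore extension from the [pollsters] source being submitted to a single shared ThreadPoolExecutor (here sized to 1 worker, hence the earlier warning that pollsters outnumber worker threads). A simplified sketch of that pattern, not ceilometer's actual implementation, could look like this:

# Simplified sketch (hypothetical, not ceilometer code): submit every pollster
# from one source to a shared executor, with caches shared across the task.
from concurrent.futures import ThreadPoolExecutor

def run_polling_task(pollsters, poll_one, worker_threads=1):
    cache, history, discovery_cache = {}, {}, {}
    with ThreadPoolExecutor(max_workers=worker_threads) as executor:
        futures = [
            # Mirrors register_pollster_execution: one submission per pollster.
            executor.submit(poll_one, ext, cache, history, discovery_cache)
            for ext in pollsters
        ]
        return [f.result() for f in futures]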
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.195 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'name': 'te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.198 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cc1e9e66-56af-4162-a89f-c97758ee1a64', 'name': 'te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.198 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:23:42.199046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
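The disk.ephemeral.size entries above show one complete per-pollster cycle: discovery via local_instances, a coordination check that finds no hashring configured, a heartbeat update, then sample collection. A hypothetical stand-in for that cycle (names and signatures are illustrative, not ceilometer internals):

# Hypothetical per-pollster cycle matching the log sequence above.
import datetime

def poll_one(name, pollster, discover, heartbeat, cache, discovery_cache,
             coordination_group=None):
    # 1. Discovery (manager.py:294): find the local instances to sample.
    resources = discover('local_instances', discovery_cache)
    # 2. Coordination check (manager.py:333/355): with no hashring configured,
    #    this agent polls every discovered resource itself.
    if coordination_group is not None:
        raise NotImplementedError('hashring partitioning not sketched here')
    # 3. Heartbeat (manager.py:636 and :502): record that the pollster ran.
    heartbeat(name, datetime.datetime.now(datetime.timezone.utc))
    # 4. Sample collection: the pollster turns instance stats into samples.
    return list(pollster(resources, cache))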
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.200 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:23:42.200672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.216 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.216 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.232 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.233 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.233 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
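The two disk.device.capacity samples per instance are raw byte counts, so they can be sanity-checked against the flavor in the discovery output above (m1.nano, 'disk': 1). 1073741824 bytes is exactly 1 GiB, the root disk; the 509952-byte device is presumably the second, much smaller disk attached to the guest.

# Quick check of the logged capacity values (pure arithmetic).
assert 1073741824 == 2**30
print(1073741824 / 2**30, "GiB")   # 1.0 GiB root disk, matches flavor disk=1
print(509952 / 1024, "KiB")        # 498.0 KiB for the small second device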
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.233 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.233 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.233 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:23:42.233914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.234 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.234 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:23:42.234844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.238 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.241 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:23:42.242533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.242 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.243 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.243 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.243 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.244 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:23:42.243989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.244 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.244 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.244 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.244 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.245 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.245 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.245 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.245 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.245 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.245 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.246 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:23:42.245357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.246 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.246 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.246 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.246 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.247 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.247 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:23:42.246877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.247 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.247 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.248 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.248 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.248 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.248 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:23:42.248270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.268 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/memory.usage volume: 42.52734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.287 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/memory.usage volume: 43.65234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
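Assuming ceilometer's usual MB unit for memory.usage, the sampled values can be compared against the flavor's 128 MB of RAM from the discovery output above:

# Sanity check on the memory.usage sample for instance ca7daa1b-... (unit
# assumed to be MB, as is usual for this meter).
flavor_ram_mb = 128
usage_mb = 42.52734375
print(f"{usage_mb / flavor_ram_mb:.1%} of flavor RAM in use")  # ~33.2%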
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.288 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:23:42.288723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.290 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:23:42.289916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.290 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.290 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.290 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:23:42.291494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.291 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:23:42.292759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.294 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.294 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.295 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.295 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:23:42.293994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:23:42.295133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.325 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 30525952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.326 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 nova_compute[189279]: 2025-12-10 20:23:42.357 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.362 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.362 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
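The disk.device.read.bytes figures above are cumulative per-device counters. As a rough sketch, assuming libvirt-python is available and an assumed device name of 'vda' (not the exact pollster implementation), the same numbers can be read from libvirt:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('ca7daa1b-94a2-4e08-902b-73be0ab83974')

    # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs) for one device.
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
    print(f"disk.device.read.bytes volume: {rd_bytes}")
    print(f"disk.device.read.requests volume: {rd_req}")
    conn.close()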
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.363 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.363 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/cpu volume: 335560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.364 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/cpu volume: 276680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:23:42.363787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
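The cpu samples above are cumulative guest CPU time in nanoseconds (335560000000 ns is roughly 335 s). A minimal sketch of reading the same counter via libvirt's dom.info(), purely illustrative:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('ca7daa1b-94a2-4e08-902b-73be0ab83974')

    # info() returns [state, maxMem(KiB), memory(KiB), nrVirtCpu, cpuTime(ns)].
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"cpu volume: {cpu_time_ns}")
    conn.close()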
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:23:42.364969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.365 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 563933312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.365 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 61232129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.365 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 615475482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.365 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 54317872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
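The disk.device.read.latency values above (and the disk.device.write.latency values later in this cycle) are cumulative time-spent counters in nanoseconds, not per-request latencies. A hedged sketch using libvirt's extended block statistics, with 'vda' again an assumed device name and the dictionary keys as libvirt-python exposes them:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('cc1e9e66-56af-4162-a89f-c97758ee1a64')

    # blockStatsFlags() returns a dict of extended counters, including the
    # total time spent on reads and writes in nanoseconds.
    stats = dom.blockStatsFlags('vda')
    print(f"disk.device.read.latency volume: {stats['rd_total_times']}")
    print(f"disk.device.write.latency volume: {stats['wr_total_times']}")
    conn.close()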
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:23:42.366505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 1099 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.366 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.367 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.367 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:23:42.368327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.368 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.369 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.369 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
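The pollster class named above (PerDevicePhysicalPollster) suggests disk.device.usage tracks the bytes physically occupied on the host by each device, which is the 'physical' figure in libvirt's block info. A small illustrative sketch under that assumption, again with an assumed device name of 'vda':

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('ca7daa1b-94a2-4e08-902b-73be0ab83974')

    # blockInfo() returns [capacity, allocation, physical] in bytes for one device.
    capacity, allocation, physical = dom.blockInfo('vda')
    print(f"disk.device.usage volume: {physical}")
    conn.close()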
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.369 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.370 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:23:42.369979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.370 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.370 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.370 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.371 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.371 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:23:42.371762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.372 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
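A power.state volume of 1, as reported for both instances above, corresponds to a running domain (libvirt's VIR_DOMAIN_RUNNING is 1). A tiny sketch of reading the same state, not the actual pollster:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('cc1e9e66-56af-4162-a89f-c97758ee1a64')

    # state() returns [state, reason]; VIR_DOMAIN_RUNNING == 1.
    state, reason = dom.state()
    print(f"power.state volume: {state}")   # 1 for a running instance
    conn.close()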
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.372 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.373 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.373 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 3722115177 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.373 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:23:42.373017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.373 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 9196700407 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.373 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.374 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.375 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:23:42.374827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.375 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.375 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.375 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.376 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.376 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.377 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:23:42.376683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
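Meters ending in .delta report the change in a cumulative counter since the previous polling cycle, so an idle interface yields 0 as above. A generic sketch of that bookkeeping, with names that are illustrative rather than ceilometer internals:

    # Cache of the last cumulative reading per (instance, meter) key.
    _previous = {}

    def delta_sample(key, current_value):
        """Return the change since the last cycle, or 0 on the first reading."""
        last = _previous.get(key)
        _previous[key] = current_value
        return 0 if last is None else max(current_value - last, 0)

    # Example: two cycles with no traffic in between produce a zero delta.
    delta_sample(('cc1e9e66', 'network.incoming.bytes'), 421000)
    print(delta_sample(('cc1e9e66', 'network.incoming.bytes'), 421000))  # -> 0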
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.377 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.378 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.379 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.380 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.380 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:23:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:23:42.380 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
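The block of "Finished processing pollster" lines above marks the end of one polling task: each pollster has run discovery, emitted its samples, updated its heartbeat, or been skipped when discovery returned nothing (as for network.incoming.bytes.rate earlier). A compressed sketch of that control flow under assumed names, not the manager's real code:

    import datetime

    def run_polling_task(pollsters, discover, heartbeats):
        """Very rough shape of one polling cycle: discover, poll, heartbeat."""
        for name, poll_fn in pollsters.items():
            resources = discover('local_instances')
            if not resources:
                print(f"Skip pollster {name}, no new resources found this cycle")
                continue
            for resource in resources:
                poll_fn(resource)           # emits one sample per resource
            heartbeats[name] = datetime.datetime.now(datetime.timezone.utc).isoformat()
            print(f"Finished processing pollster [{name}].")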
Dec 10 20:23:46 compute-0 podman[255191]: 2025-12-10 20:23:46.093237334 +0000 UTC m=+0.069822912 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
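The podman health_status events in this log come from each container's configured healthcheck (the 'healthcheck' entry in config_data, e.g. '/openstack/healthcheck compute'). The same check can be triggered by hand; a minimal sketch using subprocess, with the container name taken from the log line above:

    import subprocess

    # 'podman healthcheck run' executes the container's configured healthcheck
    # command and exits 0 when the container is healthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")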
Dec 10 20:23:46 compute-0 nova_compute[189279]: 2025-12-10 20:23:46.619 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:47 compute-0 nova_compute[189279]: 2025-12-10 20:23:47.359 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:51 compute-0 nova_compute[189279]: 2025-12-10 20:23:51.622 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:52 compute-0 nova_compute[189279]: 2025-12-10 20:23:52.361 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:56 compute-0 nova_compute[189279]: 2025-12-10 20:23:56.625 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:57 compute-0 podman[255210]: 2025-12-10 20:23:57.098552141 +0000 UTC m=+0.085101054 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:23:57 compute-0 podman[255211]: 2025-12-10 20:23:57.099079305 +0000 UTC m=+0.078231469 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Dec 10 20:23:57 compute-0 nova_compute[189279]: 2025-12-10 20:23:57.362 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:23:59 compute-0 podman[203484]: time="2025-12-10T20:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:23:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:23:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4817 "" "Go-http-client/1.1"
Dec 10 20:24:01 compute-0 openstack_network_exporter[205632]: ERROR   20:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:24:01 compute-0 openstack_network_exporter[205632]: ERROR   20:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:24:01 compute-0 openstack_network_exporter[205632]: ERROR   20:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:24:01 compute-0 openstack_network_exporter[205632]: ERROR   20:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:24:01 compute-0 openstack_network_exporter[205632]: ERROR   20:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
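The exporter errors above mean it could not find any ovs-appctl control sockets for ovsdb-server or ovn-northd in its run directories (and, with no datapath present, the dpif-netdev/* queries have nothing to report). A hedged sketch of that first check; the directory and the *.ctl naming pattern are assumptions for illustration:

    import glob

    def find_control_sockets(run_dir="/run/openvswitch", daemon="ovsdb-server"):
        """Return the daemon's ovs-appctl control socket files, e.g.
        /run/openvswitch/ovsdb-server.<pid>.ctl (pattern assumed here)."""
        return glob.glob(f"{run_dir}/{daemon}.*.ctl")

    if not find_control_sockets():
        print("no control socket files found for the ovs db server")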
Dec 10 20:24:01 compute-0 nova_compute[189279]: 2025-12-10 20:24:01.629 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:02 compute-0 nova_compute[189279]: 2025-12-10 20:24:02.364 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:06 compute-0 podman[255253]: 2025-12-10 20:24:06.09238548 +0000 UTC m=+0.069198105 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 10 20:24:06 compute-0 podman[255255]: 2025-12-10 20:24:06.102102551 +0000 UTC m=+0.075799322 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=)
Dec 10 20:24:06 compute-0 podman[255254]: 2025-12-10 20:24:06.120409715 +0000 UTC m=+0.098562936 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:24:06 compute-0 nova_compute[189279]: 2025-12-10 20:24:06.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:06 compute-0 nova_compute[189279]: 2025-12-10 20:24:06.632 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:07 compute-0 nova_compute[189279]: 2025-12-10 20:24:07.367 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:07 compute-0 nova_compute[189279]: 2025-12-10 20:24:07.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
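nova-compute's "Running periodic task ..." lines come from oslo.service's periodic task machinery: methods on the compute manager decorated with periodic_task run on their own intervals whenever run_periodic_tasks() is called. A minimal, self-contained sketch of that pattern; the task name and spacing are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class ExampleManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # Runs at most once every 60 seconds when run_periodic_tasks() is called.
        @periodic_task.periodic_task(spacing=60)
        def _poll_example(self, context):
            print("Running periodic task ExampleManager._poll_example")

    manager = ExampleManager()
    manager.run_periodic_tasks(context=None)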
Dec 10 20:24:10 compute-0 podman[255306]: 2025-12-10 20:24:10.098621066 +0000 UTC m=+0.079365619 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 10 20:24:10 compute-0 podman[255307]: 2025-12-10 20:24:10.106805327 +0000 UTC m=+0.078248399 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:24:11 compute-0 nova_compute[189279]: 2025-12-10 20:24:11.636 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:12 compute-0 podman[255349]: 2025-12-10 20:24:12.104631396 +0000 UTC m=+0.087424635 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 20:24:12 compute-0 nova_compute[189279]: 2025-12-10 20:24:12.370 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:12 compute-0 nova_compute[189279]: 2025-12-10 20:24:12.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:12 compute-0 nova_compute[189279]: 2025-12-10 20:24:12.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:12 compute-0 nova_compute[189279]: 2025-12-10 20:24:12.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.522 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.523 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.523 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.524 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.609 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.689 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.691 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.755 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.764 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.827 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.829 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:24:14 compute-0 nova_compute[189279]: 2025-12-10 20:24:14.889 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
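The resource tracker's disk audit runs qemu-img info for each instance disk under oslo_concurrency.prlimit, capping the probe at 1 GiB of address space and 30 s of CPU time, exactly as the command lines above show. A minimal sketch that reproduces one of those probes and parses its JSON output (disk path copied from the log; assumes qemu-img and the oslo.concurrency package are present on the host):

import json
import subprocess

# Same probe nova logs above: qemu-img info wrapped in oslo's prlimit helper
# (1 GiB address-space cap, 30 s CPU cap), forced shareable, JSON output.
disk = "/var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk"
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", disk, "--force-share", "--output=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
info = json.loads(out)
print(info.get("format"), info.get("virtual-size"), info.get("actual-size"))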
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.183 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.184 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4878MB free_disk=72.23245620727539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.184 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.185 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.276 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.276 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.277 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.277 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.329 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.346 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
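The inventory that the report client considers unchanged is what placement uses to compute schedulable capacity, with the usual formula capacity = (total - reserved) * allocation_ratio. A small worked example with the values logged above (a sketch of that formula, not code taken from nova itself):

# Schedulable capacity per resource class for provider
# fc709657-cb59-4c0b-8f54-5be8a24ab091, using the inventory values above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2 -- consistent with the
# "Final resource view" line above (2 of 8 vCPUs and 768 MB RAM in use).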
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.348 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:24:15 compute-0 nova_compute[189279]: 2025-12-10 20:24:15.348 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:24:16 compute-0 nova_compute[189279]: 2025-12-10 20:24:16.350 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:16 compute-0 nova_compute[189279]: 2025-12-10 20:24:16.350 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:24:16 compute-0 podman[255387]: 2025-12-10 20:24:16.446212697 +0000 UTC m=+0.063005409 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 10 20:24:16 compute-0 nova_compute[189279]: 2025-12-10 20:24:16.639 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:17 compute-0 nova_compute[189279]: 2025-12-10 20:24:17.250 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:24:17 compute-0 nova_compute[189279]: 2025-12-10 20:24:17.251 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:24:17 compute-0 nova_compute[189279]: 2025-12-10 20:24:17.251 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:24:17 compute-0 nova_compute[189279]: 2025-12-10 20:24:17.372 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:19 compute-0 nova_compute[189279]: 2025-12-10 20:24:19.393 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:24:19 compute-0 nova_compute[189279]: 2025-12-10 20:24:19.409 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:24:19 compute-0 nova_compute[189279]: 2025-12-10 20:24:19.409 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
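The refreshed cache entry shows instance cc1e9e66-56af-4162-a89f-c97758ee1a64 plugged in through OVS interface tap191db221-f5 on br-int with the ovn driver bound. To cross-check that binding on the host, a sketch using ovs-vsctl (interface name taken from the cache entry above; assumes the openvswitch CLI tools are installed):

import subprocess

# Show the OVS interface backing the instance port from the cache entry above;
# its external_ids should reference neutron port 191db221-f5ea-4b4e-aa90-70dca09235b1.
out = subprocess.run(
    ["ovs-vsctl", "--columns=name,external_ids,ofport",
     "list", "Interface", "tap191db221-f5"],
    capture_output=True, text=True, check=True,
).stdout
print(out)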
Dec 10 20:24:19 compute-0 nova_compute[189279]: 2025-12-10 20:24:19.410 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:19 compute-0 nova_compute[189279]: 2025-12-10 20:24:19.410 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:21 compute-0 nova_compute[189279]: 2025-12-10 20:24:21.643 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:22 compute-0 nova_compute[189279]: 2025-12-10 20:24:22.375 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:22 compute-0 nova_compute[189279]: 2025-12-10 20:24:22.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:24:23.411 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:24:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:24:23.412 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:24:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:24:23.412 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:24:26 compute-0 nova_compute[189279]: 2025-12-10 20:24:26.645 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:27 compute-0 nova_compute[189279]: 2025-12-10 20:24:27.377 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:28 compute-0 podman[255407]: 2025-12-10 20:24:28.133123436 +0000 UTC m=+0.116713905 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:24:28 compute-0 podman[255408]: 2025-12-10 20:24:28.144086031 +0000 UTC m=+0.116425247 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, managed_by=edpm_ansible)
Dec 10 20:24:29 compute-0 podman[203484]: time="2025-12-10T20:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:24:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:24:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
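The two GET requests above are the podman_exporter scraping the libpod REST API; podman[203484] here is the podman API service answering on the socket the exporter mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). A standard-library sketch of the same container-listing call (URL path copied from the access log; assumes read access to the socket):

import socket

# Mirror the "GET /v4.9.3/libpod/containers/json?all=true" request logged
# above, speaking HTTP/1.1 directly over podman's unix socket.
SOCKET_PATH = "/run/podman/podman.sock"
REQUEST = (
    "GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
    "Host: d\r\n"
    "Connection: close\r\n"
    "\r\n"
)
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCKET_PATH)
    sock.sendall(REQUEST.encode())
    chunks = []
    while True:
        data = sock.recv(65536)
        if not data:
            break
        chunks.append(data)
raw = b"".join(chunks)
print(raw.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
# The JSON body (the ~29 kB payload in the access log above) follows the headers.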
Dec 10 20:24:31 compute-0 openstack_network_exporter[205632]: ERROR   20:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:24:31 compute-0 openstack_network_exporter[205632]: ERROR   20:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:24:31 compute-0 openstack_network_exporter[205632]: ERROR   20:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:24:31 compute-0 openstack_network_exporter[205632]: ERROR   20:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:24:31 compute-0 openstack_network_exporter[205632]: ERROR   20:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
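The dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show failures are most likely benign here: those appctl commands only report on a userspace (netdev) datapath, and the ports on this node are bound with datapath_type "system" (see the instance network info above). The ovn-northd lookups fail because ovn-northd runs on the control plane rather than on a compute node. A quick way to confirm which datapaths ovs-vswitchd actually has, sketched with ovs-appctl (assumes the openvswitch CLI is installed on the host):

import subprocess

# List the datapaths known to ovs-vswitchd. On a kernel-datapath host this
# typically shows "system@ovs-system" and no "netdev@..." entry, which is why
# the dpif-netdev/* calls above have no datapath to report on.
dps = subprocess.run(
    ["ovs-appctl", "dpctl/dump-dps"],
    capture_output=True, text=True, check=True,
).stdout
print(dps)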
Dec 10 20:24:31 compute-0 nova_compute[189279]: 2025-12-10 20:24:31.648 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:32 compute-0 nova_compute[189279]: 2025-12-10 20:24:32.379 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.488 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.488 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.489 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.489 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.490 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.490 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.522 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.537 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.537 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Image id ab2dea70-7375-4e2d-beda-90f19a5ec15e yields fingerprint 53f56b563801b5ea0f834b33920c5e6aa39aeede _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.538 189283 INFO nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] image ab2dea70-7375-4e2d-beda-90f19a5ec15e at (/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede): checking
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.538 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] image ab2dea70-7375-4e2d-beda-90f19a5ec15e at (/var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.542 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.542 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] ca7daa1b-94a2-4e08-902b-73be0ab83974 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.543 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] ca7daa1b-94a2-4e08-902b-73be0ab83974 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.543 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.609 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.611 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 is backed by 53f56b563801b5ea0f834b33920c5e6aa39aeede _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.611 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] cc1e9e66-56af-4162-a89f-c97758ee1a64 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.612 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] cc1e9e66-56af-4162-a89f-c97758ee1a64 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.613 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.676 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.678 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 is backed by 53f56b563801b5ea0f834b33920c5e6aa39aeede _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.679 189283 WARNING nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.679 189283 WARNING nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.680 189283 WARNING nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.680 189283 INFO nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Active base files: /var/lib/nova/instances/_base/53f56b563801b5ea0f834b33920c5e6aa39aeede
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.681 189283 INFO nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Removable base files: /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9 /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6 /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.682 189283 INFO nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/193edf3941027c090c206b4992bbea3ae5563eb9
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.683 189283 INFO nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/490d50a9caa1916c71e31166385320ae93d214b6
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.683 189283 INFO nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6f27c3b74299e89bd51ef4292a29b048cf6b0905
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.684 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.684 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.685 189283 DEBUG nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 10 20:24:34 compute-0 nova_compute[189279]: 2025-12-10 20:24:34.685 189283 INFO nova.virt.libvirt.imagecache [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec 10 20:24:36 compute-0 nova_compute[189279]: 2025-12-10 20:24:36.651 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:37 compute-0 podman[255455]: 2025-12-10 20:24:37.105921169 +0000 UTC m=+0.085877005 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:24:37 compute-0 podman[255456]: 2025-12-10 20:24:37.110119312 +0000 UTC m=+0.087263573 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 10 20:24:37 compute-0 podman[255457]: 2025-12-10 20:24:37.142445523 +0000 UTC m=+0.113125199 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Dec 10 20:24:37 compute-0 nova_compute[189279]: 2025-12-10 20:24:37.381 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:41 compute-0 podman[255513]: 2025-12-10 20:24:41.129242944 +0000 UTC m=+0.088259158 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:24:41 compute-0 podman[255512]: 2025-12-10 20:24:41.138099793 +0000 UTC m=+0.109450240 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 10 20:24:41 compute-0 nova_compute[189279]: 2025-12-10 20:24:41.654 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:42 compute-0 nova_compute[189279]: 2025-12-10 20:24:42.384 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:43 compute-0 podman[255553]: 2025-12-10 20:24:43.197687177 +0000 UTC m=+0.160153595 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 10 20:24:46 compute-0 nova_compute[189279]: 2025-12-10 20:24:46.656 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:47 compute-0 podman[255578]: 2025-12-10 20:24:47.143184507 +0000 UTC m=+0.105725479 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:24:47 compute-0 nova_compute[189279]: 2025-12-10 20:24:47.387 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:51 compute-0 nova_compute[189279]: 2025-12-10 20:24:51.659 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:52 compute-0 nova_compute[189279]: 2025-12-10 20:24:52.389 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:56 compute-0 nova_compute[189279]: 2025-12-10 20:24:56.663 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:57 compute-0 nova_compute[189279]: 2025-12-10 20:24:57.393 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:24:59 compute-0 podman[255601]: 2025-12-10 20:24:59.095013432 +0000 UTC m=+0.075159105 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:24:59 compute-0 podman[255600]: 2025-12-10 20:24:59.11719905 +0000 UTC m=+0.091894366 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:24:59 compute-0 podman[203484]: time="2025-12-10T20:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:24:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:24:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4817 "" "Go-http-client/1.1"
Dec 10 20:25:01 compute-0 openstack_network_exporter[205632]: ERROR   20:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:25:01 compute-0 openstack_network_exporter[205632]: ERROR   20:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:25:01 compute-0 openstack_network_exporter[205632]: ERROR   20:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:25:01 compute-0 openstack_network_exporter[205632]: ERROR   20:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:25:01 compute-0 openstack_network_exporter[205632]: ERROR   20:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:25:01 compute-0 nova_compute[189279]: 2025-12-10 20:25:01.665 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:02 compute-0 nova_compute[189279]: 2025-12-10 20:25:02.395 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:06 compute-0 nova_compute[189279]: 2025-12-10 20:25:06.668 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:07 compute-0 nova_compute[189279]: 2025-12-10 20:25:07.397 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:07 compute-0 nova_compute[189279]: 2025-12-10 20:25:07.687 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:08 compute-0 podman[255644]: 2025-12-10 20:25:08.102208772 +0000 UTC m=+0.077270983 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi)
Dec 10 20:25:08 compute-0 podman[255645]: 2025-12-10 20:25:08.102992463 +0000 UTC m=+0.074162099 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 10 20:25:08 compute-0 podman[255643]: 2025-12-10 20:25:08.11736268 +0000 UTC m=+0.099191963 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 10 20:25:08 compute-0 nova_compute[189279]: 2025-12-10 20:25:08.491 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:11 compute-0 nova_compute[189279]: 2025-12-10 20:25:11.671 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:12 compute-0 podman[255701]: 2025-12-10 20:25:12.084830482 +0000 UTC m=+0.059975947 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:25:12 compute-0 podman[255700]: 2025-12-10 20:25:12.089865058 +0000 UTC m=+0.069696399 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 10 20:25:12 compute-0 nova_compute[189279]: 2025-12-10 20:25:12.399 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:12 compute-0 nova_compute[189279]: 2025-12-10 20:25:12.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:12 compute-0 nova_compute[189279]: 2025-12-10 20:25:12.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:25:13 compute-0 nova_compute[189279]: 2025-12-10 20:25:13.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:14 compute-0 podman[255743]: 2025-12-10 20:25:14.156106791 +0000 UTC m=+0.117820495 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.667 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.668 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.668 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.669 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.674 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.742 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.843 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.845 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.910 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:25:16 compute-0 nova_compute[189279]: 2025-12-10 20:25:16.917 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.000 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.001 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.101 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.401 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.544 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.545 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4896MB free_disk=72.23250198364258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.545 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.546 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.697 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.698 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.698 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.698 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.712 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.779 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.780 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.797 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.820 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.890 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.910 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.913 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:25:17 compute-0 nova_compute[189279]: 2025-12-10 20:25:17.913 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:25:18 compute-0 podman[255780]: 2025-12-10 20:25:18.164202007 +0000 UTC m=+0.126946811 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 10 20:25:18 compute-0 nova_compute[189279]: 2025-12-10 20:25:18.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:18 compute-0 nova_compute[189279]: 2025-12-10 20:25:18.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:25:18 compute-0 nova_compute[189279]: 2025-12-10 20:25:18.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:25:19 compute-0 nova_compute[189279]: 2025-12-10 20:25:19.332 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:25:19 compute-0 nova_compute[189279]: 2025-12-10 20:25:19.332 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:25:19 compute-0 nova_compute[189279]: 2025-12-10 20:25:19.332 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:25:19 compute-0 nova_compute[189279]: 2025-12-10 20:25:19.332 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.319 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.331 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.332 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.332 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.332 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.333 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.333 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 20:25:21 compute-0 nova_compute[189279]: 2025-12-10 20:25:21.676 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:22 compute-0 nova_compute[189279]: 2025-12-10 20:25:22.405 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:25:23.413 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:25:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:25:23.415 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:25:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:25:23.416 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:25:23 compute-0 nova_compute[189279]: 2025-12-10 20:25:23.498 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:23 compute-0 nova_compute[189279]: 2025-12-10 20:25:23.533 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:26 compute-0 nova_compute[189279]: 2025-12-10 20:25:26.679 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:27 compute-0 nova_compute[189279]: 2025-12-10 20:25:27.407 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:27 compute-0 nova_compute[189279]: 2025-12-10 20:25:27.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:27 compute-0 nova_compute[189279]: 2025-12-10 20:25:27.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 20:25:27 compute-0 nova_compute[189279]: 2025-12-10 20:25:27.511 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 20:25:29 compute-0 podman[203484]: time="2025-12-10T20:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:25:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:25:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec 10 20:25:30 compute-0 podman[255800]: 2025-12-10 20:25:30.093423093 +0000 UTC m=+0.076248355 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:25:30 compute-0 podman[255801]: 2025-12-10 20:25:30.12150732 +0000 UTC m=+0.092744859 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 10 20:25:31 compute-0 openstack_network_exporter[205632]: ERROR   20:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:25:31 compute-0 openstack_network_exporter[205632]: ERROR   20:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:25:31 compute-0 openstack_network_exporter[205632]: ERROR   20:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:25:31 compute-0 openstack_network_exporter[205632]: ERROR   20:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:25:31 compute-0 openstack_network_exporter[205632]: ERROR   20:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
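Annotation: these exporter errors show openstack_network_exporter probing for OVS/OVN control sockets it cannot find; ovn-northd typically does not run on a compute node, so the ovn-northd messages are expected noise here rather than a fault. A minimal sketch for listing which control sockets actually exist, using the directories mounted into the exporter container above (paths taken from that config_data; they may differ on other deployments):

import glob
import os

SOCKET_DIRS = ["/var/run/openvswitch", "/var/lib/openvswitch/ovn"]

for directory in SOCKET_DIRS:
    ctl_files = glob.glob(os.path.join(directory, "*.ctl"))
    print(directory, "->", ctl_files or "no control socket files found")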
Dec 10 20:25:31 compute-0 nova_compute[189279]: 2025-12-10 20:25:31.683 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:32 compute-0 nova_compute[189279]: 2025-12-10 20:25:32.410 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
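Annotation: the recurring "[POLLIN] on fd 26 __log_wakeup" lines are the OVS Python IDL reporting that its poll loop woke up because the OVSDB connection became readable; they are routine at DEBUG level. A minimal sketch of the underlying ovs.poller API (requires the python ovs package; the plain socket here is only a stand-in for the IDL's OVSDB connection):

import select
import socket
import ovs.poller

sock = socket.socket()                        # stand-in fd; the IDL uses its OVSDB socket
poller = ovs.poller.Poller()
poller.fd_wait(sock.fileno(), select.POLLIN)  # wait for readability on the fd
poller.timer_wait(1000)                       # also wake after 1s so the example returns
poller.block()                                # blocks until the fd is readable or the timer fires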
Dec 10 20:25:34 compute-0 sshd-session[255844]: Invalid user solv from 80.94.92.184 port 41654
Dec 10 20:25:34 compute-0 sshd-session[255844]: Connection closed by invalid user solv 80.94.92.184 port 41654 [preauth]
Dec 10 20:25:35 compute-0 nova_compute[189279]: 2025-12-10 20:25:35.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
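Annotation: the "Running periodic task ComputeManager._cleanup_expired_console_auth_tokens" entry is oslo.service's periodic-task machinery firing one of nova-compute's housekeeping jobs. A minimal sketch of how such a task is declared and driven with oslo_service (illustrative only; the spacing value is an assumption, not nova's configuration):

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _cleanup_expired_tokens(self, context):
        # placeholder body; nova's real task expires console auth tokens in the database
        print("cleanup task ran")

manager = DemoManager()
manager.run_periodic_tasks(context=None)   # normally invoked on a timer by the service loop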
Dec 10 20:25:36 compute-0 nova_compute[189279]: 2025-12-10 20:25:36.686 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:37 compute-0 nova_compute[189279]: 2025-12-10 20:25:37.412 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:39 compute-0 podman[255846]: 2025-12-10 20:25:39.102209495 +0000 UTC m=+0.074244411 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 10 20:25:39 compute-0 podman[255847]: 2025-12-10 20:25:39.109078011 +0000 UTC m=+0.081911368 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:25:39 compute-0 podman[255848]: 2025-12-10 20:25:39.123020606 +0000 UTC m=+0.089080531 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.313 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.339 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Triggering sync for uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.339 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Triggering sync for uuid cc1e9e66-56af-4162-a89f-c97758ee1a64 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.339 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.340 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.340 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.341 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.382 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.383 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
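Annotation: the Acquiring/acquired/released trio around each instance UUID is oslo.concurrency's lockutils serializing the per-instance power-state sync. A minimal sketch of the same pattern (the function body is a placeholder; only the locking shape mirrors the log above):

from oslo_concurrency import lockutils

def sync_power_state(instance_uuid):
    # lockutils.synchronized emits the same Acquiring/acquired/released DEBUG
    # messages seen above when oslo.concurrency debug logging is enabled.
    @lockutils.synchronized(instance_uuid)
    def query_driver_power_state_and_sync():
        pass  # placeholder: query the hypervisor and update the instance record

    query_driver_power_state_and_sync()

sync_power_state("ca7daa1b-94a2-4e08-902b-73be0ab83974")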
Dec 10 20:25:41 compute-0 nova_compute[189279]: 2025-12-10 20:25:41.690 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.186 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.188 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.188 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.197 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'name': 'te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.198 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.198 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.199 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa1554590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
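Annotation: the long run of "Registering pollster [<stevedore.extension.Extension ...>]" lines shows ceilometer loading each compute pollster as a stevedore plugin and queueing it onto the single-threaded executor noted earlier. A minimal sketch of enumerating plugins from a stevedore namespace (the namespace string is an assumption used for illustration):

from concurrent.futures import ThreadPoolExecutor
from stevedore import extension

manager = extension.ExtensionManager(namespace="ceilometer.poll.compute", invoke_on_load=False)
executor = ThreadPoolExecutor(max_workers=1)   # matches the "[1] threads" line above

for ext in manager:
    # each ext is a stevedore.extension.Extension wrapping one pollster entry point
    print("registering pollster", ext.name)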
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.203 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cc1e9e66-56af-4162-a89f-c97758ee1a64', 'name': 'te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.204 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:25:42.204670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.207 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:25:42.207196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.233 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.234 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.256 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.257 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
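Annotation: the two disk.device.capacity samples per instance line up with the m1.nano flavor reported in the discovery data above: a 1 GiB root disk is 1024**3 = 1073741824 bytes, plus a second, much smaller device of 509952 bytes (presumably the config drive; that attribution is an assumption, not stated in the log). A quick check in Python:

# 1 GiB root disk from the m1.nano flavor ('disk': 1)
assert 1 * 1024 ** 3 == 1073741824
# the smaller device reported alongside it
print(509952 / 1024, "KiB")   # 498.0 KiB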
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.257 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.258 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.258 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.258 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.258 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.259 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:25:42.258097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:25:42.259126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.263 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.266 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.267 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.268 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.268 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:25:42.267758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.269 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.269 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.270 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.270 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.271 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.271 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.271 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.271 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.272 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.273 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.273 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.273 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.274 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.275 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:25:42.269843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:25:42.271634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:25:42.273186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:25:42.275115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.304 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/memory.usage volume: 42.52734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.332 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/memory.usage volume: 42.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.334 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.334 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.334 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.336 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:25:42.334086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.336 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.336 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.337 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:25:42.336105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.337 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.339 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:25:42.338882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.339 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.341 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:25:42.340996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.341 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.341 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.342 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.343 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.343 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:25:42.343446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.344 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:25:42.346121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.406 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 30525952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.406 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 nova_compute[189279]: 2025-12-10 20:25:42.414 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.453 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.454 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.454 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.455 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.455 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/cpu volume: 337050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.455 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/cpu volume: 334490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 563933312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.456 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 61232129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.457 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 634633409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.457 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 60351267 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:25:42.455372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:25:42.456439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.458 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.458 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 1099 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.458 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.459 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.460 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.460 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:25:42.458374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.460 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:25:42.459978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.461 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.461 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.461 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.462 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.462 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.462 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.462 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.463 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.463 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.463 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.463 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.464 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.464 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.464 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 3722115177 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.465 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.465 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 9268528323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.465 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.466 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.466 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:25:42.461928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:25:42.463509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:25:42.464781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:25:42.466380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.467 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.468 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.468 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.468 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:25:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:25:42.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:25:42.468138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:25:43 compute-0 podman[255903]: 2025-12-10 20:25:43.086250244 +0000 UTC m=+0.063499422 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:25:43 compute-0 podman[255902]: 2025-12-10 20:25:43.143836945 +0000 UTC m=+0.113854189 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:25:44 compute-0 podman[255944]: 2025-12-10 20:25:44.788943084 +0000 UTC m=+0.108488845 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Dec 10 20:25:46 compute-0 nova_compute[189279]: 2025-12-10 20:25:46.691 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:47 compute-0 nova_compute[189279]: 2025-12-10 20:25:47.416 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:49 compute-0 podman[255967]: 2025-12-10 20:25:49.163015999 +0000 UTC m=+0.132904302 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.schema-version=1.0)
Dec 10 20:25:51 compute-0 nova_compute[189279]: 2025-12-10 20:25:51.694 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:52 compute-0 nova_compute[189279]: 2025-12-10 20:25:52.418 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:56 compute-0 nova_compute[189279]: 2025-12-10 20:25:56.696 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:57 compute-0 nova_compute[189279]: 2025-12-10 20:25:57.422 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:25:59 compute-0 podman[203484]: time="2025-12-10T20:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:25:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:25:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec 10 20:26:01 compute-0 podman[255987]: 2025-12-10 20:26:01.100093647 +0000 UTC m=+0.082877634 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 10 20:26:01 compute-0 podman[255988]: 2025-12-10 20:26:01.149737265 +0000 UTC m=+0.122733658 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 10 20:26:01 compute-0 openstack_network_exporter[205632]: ERROR   20:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:26:01 compute-0 openstack_network_exporter[205632]: ERROR   20:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:26:01 compute-0 openstack_network_exporter[205632]: ERROR   20:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:26:01 compute-0 openstack_network_exporter[205632]: ERROR   20:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:26:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:26:01 compute-0 openstack_network_exporter[205632]: ERROR   20:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:26:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:26:01 compute-0 nova_compute[189279]: 2025-12-10 20:26:01.699 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:02 compute-0 nova_compute[189279]: 2025-12-10 20:26:02.424 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:06 compute-0 nova_compute[189279]: 2025-12-10 20:26:06.702 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:07 compute-0 nova_compute[189279]: 2025-12-10 20:26:07.427 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:07 compute-0 nova_compute[189279]: 2025-12-10 20:26:07.515 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:10 compute-0 podman[256033]: 2025-12-10 20:26:10.113611967 +0000 UTC m=+0.079730599 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:26:10 compute-0 podman[256034]: 2025-12-10 20:26:10.12153828 +0000 UTC m=+0.084383054 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:26:10 compute-0 podman[256035]: 2025-12-10 20:26:10.137049898 +0000 UTC m=+0.095594117 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=)
Dec 10 20:26:10 compute-0 nova_compute[189279]: 2025-12-10 20:26:10.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:11 compute-0 nova_compute[189279]: 2025-12-10 20:26:11.706 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:12 compute-0 nova_compute[189279]: 2025-12-10 20:26:12.430 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:14 compute-0 podman[256090]: 2025-12-10 20:26:14.094303734 +0000 UTC m=+0.072355330 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:26:14 compute-0 podman[256089]: 2025-12-10 20:26:14.133176712 +0000 UTC m=+0.102501803 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Dec 10 20:26:14 compute-0 nova_compute[189279]: 2025-12-10 20:26:14.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:14 compute-0 nova_compute[189279]: 2025-12-10 20:26:14.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:14 compute-0 nova_compute[189279]: 2025-12-10 20:26:14.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:26:15 compute-0 podman[256130]: 2025-12-10 20:26:15.214760509 +0000 UTC m=+0.170068673 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.510 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.511 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.511 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.511 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.588 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.696 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.698 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.721 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.775 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.783 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.845 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.846 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:26:16 compute-0 nova_compute[189279]: 2025-12-10 20:26:16.906 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.229 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.231 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4876MB free_disk=72.23250198364258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.231 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.232 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.312 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.312 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.313 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.313 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.432 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.461 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.484 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.485 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:26:17 compute-0 nova_compute[189279]: 2025-12-10 20:26:17.486 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:26:18 compute-0 nova_compute[189279]: 2025-12-10 20:26:18.486 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:18 compute-0 nova_compute[189279]: 2025-12-10 20:26:18.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:26:19 compute-0 nova_compute[189279]: 2025-12-10 20:26:19.306 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:26:19 compute-0 nova_compute[189279]: 2025-12-10 20:26:19.306 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:26:19 compute-0 nova_compute[189279]: 2025-12-10 20:26:19.307 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:26:20 compute-0 podman[256169]: 2025-12-10 20:26:20.130131096 +0000 UTC m=+0.101926867 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:26:21 compute-0 nova_compute[189279]: 2025-12-10 20:26:21.463 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:26:21 compute-0 nova_compute[189279]: 2025-12-10 20:26:21.488 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:26:21 compute-0 nova_compute[189279]: 2025-12-10 20:26:21.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:26:21 compute-0 nova_compute[189279]: 2025-12-10 20:26:21.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:21 compute-0 nova_compute[189279]: 2025-12-10 20:26:21.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:21 compute-0 nova_compute[189279]: 2025-12-10 20:26:21.726 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:22 compute-0 nova_compute[189279]: 2025-12-10 20:26:22.435 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:26:23.414 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:26:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:26:23.416 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:26:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:26:23.417 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:26:24 compute-0 nova_compute[189279]: 2025-12-10 20:26:24.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:26:26 compute-0 nova_compute[189279]: 2025-12-10 20:26:26.728 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:27 compute-0 nova_compute[189279]: 2025-12-10 20:26:27.437 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:29 compute-0 podman[203484]: time="2025-12-10T20:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:26:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:26:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec 10 20:26:31 compute-0 openstack_network_exporter[205632]: ERROR   20:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:26:31 compute-0 openstack_network_exporter[205632]: ERROR   20:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:26:31 compute-0 openstack_network_exporter[205632]: ERROR   20:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:26:31 compute-0 openstack_network_exporter[205632]: ERROR   20:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:26:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:26:31 compute-0 openstack_network_exporter[205632]: ERROR   20:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:26:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:26:31 compute-0 nova_compute[189279]: 2025-12-10 20:26:31.730 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:32 compute-0 podman[256188]: 2025-12-10 20:26:32.112992278 +0000 UTC m=+0.090557571 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:26:32 compute-0 podman[256189]: 2025-12-10 20:26:32.125063834 +0000 UTC m=+0.100064637 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, io.openshift.tags=minimal rhel9)
Dec 10 20:26:32 compute-0 nova_compute[189279]: 2025-12-10 20:26:32.440 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:36 compute-0 nova_compute[189279]: 2025-12-10 20:26:36.733 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:37 compute-0 nova_compute[189279]: 2025-12-10 20:26:37.442 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:41 compute-0 podman[256230]: 2025-12-10 20:26:41.139427956 +0000 UTC m=+0.114975129 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:26:41 compute-0 podman[256231]: 2025-12-10 20:26:41.143160977 +0000 UTC m=+0.115772491 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 20:26:41 compute-0 podman[256232]: 2025-12-10 20:26:41.157099221 +0000 UTC m=+0.112660295 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc.)
Dec 10 20:26:41 compute-0 nova_compute[189279]: 2025-12-10 20:26:41.737 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:42 compute-0 nova_compute[189279]: 2025-12-10 20:26:42.445 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:44 compute-0 podman[256284]: 2025-12-10 20:26:44.806860634 +0000 UTC m=+0.092834531 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:26:44 compute-0 podman[256283]: 2025-12-10 20:26:44.82599352 +0000 UTC m=+0.128659877 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 10 20:26:46 compute-0 podman[256325]: 2025-12-10 20:26:46.199489741 +0000 UTC m=+0.161234074 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 10 20:26:46 compute-0 nova_compute[189279]: 2025-12-10 20:26:46.740 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:47 compute-0 nova_compute[189279]: 2025-12-10 20:26:47.447 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:51 compute-0 podman[256351]: 2025-12-10 20:26:51.097750837 +0000 UTC m=+0.073483560 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, io.buildah.version=1.41.4, org.label-schema.build-date=20251210)
Dec 10 20:26:51 compute-0 nova_compute[189279]: 2025-12-10 20:26:51.743 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:52 compute-0 nova_compute[189279]: 2025-12-10 20:26:52.451 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:56 compute-0 nova_compute[189279]: 2025-12-10 20:26:56.745 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:57 compute-0 nova_compute[189279]: 2025-12-10 20:26:57.453 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:26:59 compute-0 podman[203484]: time="2025-12-10T20:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:26:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:26:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Dec 10 20:27:01 compute-0 openstack_network_exporter[205632]: ERROR   20:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:27:01 compute-0 openstack_network_exporter[205632]: ERROR   20:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:27:01 compute-0 openstack_network_exporter[205632]: ERROR   20:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:27:01 compute-0 openstack_network_exporter[205632]: ERROR   20:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:27:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:27:01 compute-0 openstack_network_exporter[205632]: ERROR   20:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:27:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:27:01 compute-0 nova_compute[189279]: 2025-12-10 20:27:01.751 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:02 compute-0 nova_compute[189279]: 2025-12-10 20:27:02.455 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:03 compute-0 podman[256371]: 2025-12-10 20:27:03.111954444 +0000 UTC m=+0.079936134 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 10 20:27:03 compute-0 podman[256372]: 2025-12-10 20:27:03.164395597 +0000 UTC m=+0.123485458 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 10 20:27:06 compute-0 nova_compute[189279]: 2025-12-10 20:27:06.755 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:07 compute-0 nova_compute[189279]: 2025-12-10 20:27:07.459 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:09 compute-0 nova_compute[189279]: 2025-12-10 20:27:09.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:11 compute-0 nova_compute[189279]: 2025-12-10 20:27:11.758 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:12 compute-0 podman[256416]: 2025-12-10 20:27:12.117018305 +0000 UTC m=+0.090950981 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:27:12 compute-0 podman[256417]: 2025-12-10 20:27:12.12126467 +0000 UTC m=+0.086262754 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 10 20:27:12 compute-0 podman[256422]: 2025-12-10 20:27:12.136460279 +0000 UTC m=+0.093336294 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=edpm, name=ubi9)
Dec 10 20:27:12 compute-0 nova_compute[189279]: 2025-12-10 20:27:12.462 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:12 compute-0 nova_compute[189279]: 2025-12-10 20:27:12.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:14 compute-0 nova_compute[189279]: 2025-12-10 20:27:14.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:15 compute-0 podman[256472]: 2025-12-10 20:27:15.138452012 +0000 UTC m=+0.094286531 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:27:15 compute-0 podman[256471]: 2025-12-10 20:27:15.143347694 +0000 UTC m=+0.099154882 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.529 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.619 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:27:16 compute-0 podman[256513]: 2025-12-10 20:27:16.666171477 +0000 UTC m=+0.145474589 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.702 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.703 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.761 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.764 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.770 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.825 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.828 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:27:16 compute-0 nova_compute[189279]: 2025-12-10 20:27:16.892 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.233 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.237 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4860MB free_disk=72.23250198364258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.238 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.238 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.317 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.318 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.319 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.319 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.379 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.395 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.398 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.399 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:27:17 compute-0 nova_compute[189279]: 2025-12-10 20:27:17.464 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:18 compute-0 nova_compute[189279]: 2025-12-10 20:27:18.398 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:18 compute-0 nova_compute[189279]: 2025-12-10 20:27:18.399 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:27:18 compute-0 nova_compute[189279]: 2025-12-10 20:27:18.399 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:27:19 compute-0 nova_compute[189279]: 2025-12-10 20:27:19.388 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:27:19 compute-0 nova_compute[189279]: 2025-12-10 20:27:19.388 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:27:19 compute-0 nova_compute[189279]: 2025-12-10 20:27:19.389 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:27:19 compute-0 nova_compute[189279]: 2025-12-10 20:27:19.389 189283 DEBUG nova.objects.instance [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:27:21 compute-0 nova_compute[189279]: 2025-12-10 20:27:21.763 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:22 compute-0 podman[256551]: 2025-12-10 20:27:22.148260073 +0000 UTC m=+0.119726497 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251210)
Dec 10 20:27:22 compute-0 nova_compute[189279]: 2025-12-10 20:27:22.394 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [{"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:27:22 compute-0 nova_compute[189279]: 2025-12-10 20:27:22.408 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-ca7daa1b-94a2-4e08-902b-73be0ab83974" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:27:22 compute-0 nova_compute[189279]: 2025-12-10 20:27:22.408 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:27:22 compute-0 nova_compute[189279]: 2025-12-10 20:27:22.409 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:22 compute-0 nova_compute[189279]: 2025-12-10 20:27:22.409 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:22 compute-0 nova_compute[189279]: 2025-12-10 20:27:22.466 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:27:23.418 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:27:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:27:23.419 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:27:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:27:23.420 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:27:24 compute-0 nova_compute[189279]: 2025-12-10 20:27:24.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:26 compute-0 nova_compute[189279]: 2025-12-10 20:27:26.766 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:27 compute-0 nova_compute[189279]: 2025-12-10 20:27:27.469 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:27 compute-0 nova_compute[189279]: 2025-12-10 20:27:27.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:27:29 compute-0 podman[203484]: time="2025-12-10T20:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:27:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:27:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4818 "" "Go-http-client/1.1"
Dec 10 20:27:31 compute-0 openstack_network_exporter[205632]: ERROR   20:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:27:31 compute-0 openstack_network_exporter[205632]: ERROR   20:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:27:31 compute-0 openstack_network_exporter[205632]: ERROR   20:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:27:31 compute-0 openstack_network_exporter[205632]: ERROR   20:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:27:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:27:31 compute-0 openstack_network_exporter[205632]: ERROR   20:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:27:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:27:31 compute-0 nova_compute[189279]: 2025-12-10 20:27:31.769 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:32 compute-0 nova_compute[189279]: 2025-12-10 20:27:32.471 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:34 compute-0 podman[256571]: 2025-12-10 20:27:34.124913978 +0000 UTC m=+0.100538150 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:27:34 compute-0 podman[256570]: 2025-12-10 20:27:34.149813648 +0000 UTC m=+0.118788921 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:27:36 compute-0 nova_compute[189279]: 2025-12-10 20:27:36.771 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:37 compute-0 nova_compute[189279]: 2025-12-10 20:27:37.474 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:41 compute-0 nova_compute[189279]: 2025-12-10 20:27:41.773 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.188 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.189 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.189 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.195 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.197 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.198 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.198 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.198 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.199 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.199 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.199 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'name': 'te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.206 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cc1e9e66-56af-4162-a89f-c97758ee1a64', 'name': 'te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq', 'flavor': {'id': 'e8e609a5-dadd-40c2-ac6f-6fceb4ec15f4', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ab2dea70-7375-4e2d-beda-90f19a5ec15e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e773c65970c34c9db154c6fea65d9fa4', 'user_id': '639468767e8f48a1bd0e3dac90a0ec47', 'hostId': '1146dff38d7d135d99586b1e56a99cdcdda270d20fcfd89821e17131', 'status': 'active', 'metadata': {'metering.server_group': 'bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-10T20:27:42.207174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-10T20:27:42.209258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.230 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.231 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.250 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.250 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.251 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.251 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.251 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.251 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.252 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.252 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.253 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.253 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-10T20:27:42.252018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-10T20:27:42.253269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.258 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.263 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.264 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.265 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.265 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.265 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.266 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-10T20:27:42.264944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.267 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.267 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-10T20:27:42.267095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.268 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.268 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.269 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.269 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.269 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.270 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.271 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.271 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.271 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.272 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-10T20:27:42.269168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-10T20:27:42.271008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.273 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-10T20:27:42.273093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.314 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/memory.usage volume: 42.52734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.339 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/memory.usage volume: 42.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.341 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.341 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.341 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.343 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.343 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-10T20:27:42.341097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.343 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.344 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.344 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-10T20:27:42.343476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.346 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.347 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.348 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.348 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.348 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.348 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.349 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.350 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.350 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.351 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-10T20:27:42.346521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-10T20:27:42.348312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-10T20:27:42.349932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-10T20:27:42.351429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.409 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 30525952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.410 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 nova_compute[189279]: 2025-12-10 20:27:42.478 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.479 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.480 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.482 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.483 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-10T20:27:42.483119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.483 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/cpu volume: 338590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.484 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/cpu volume: 336030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.486 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 563933312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.487 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.latency volume: 61232129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-10T20:27:42.486348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.487 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 634633409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.488 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.latency volume: 60351267 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.489 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.490 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-10T20:27:42.490941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.491 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 1099 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.492 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.492 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.493 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.494 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.495 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-10T20:27:42.495041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.495 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.496 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.496 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.497 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.499 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.500 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-10T20:27:42.499373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.500 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.500 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.501 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.501 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.502 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.503 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 3722115177 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.503 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.503 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 9268528323 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-10T20:27:42.501751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.503 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-10T20:27:42.502905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.504 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.505 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.505 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-10T20:27:42.504566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 14 DEBUG ceilometer.compute.pollsters [-] ca7daa1b-94a2-4e08-902b-73be0ab83974/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 14 DEBUG ceilometer.compute.pollsters [-] cc1e9e66-56af-4162-a89f-c97758ee1a64/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-10T20:27:42.506311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.507 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.508 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.509 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:27:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:27:42.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
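Each meter in the polling cycle above is reported per instance (and, for the per-device pollsters, once per virtual disk) in a fixed format: a "<instance-uuid>/<meter> volume: <value>" DEBUG line from _stats_to_sample, plus an "Updated heartbeat for <meter>" line from the heartbeat worker. A minimal parsing sketch, for illustration only, assuming the journald prefix and message wording stay as shown and that this log has been saved to a plain file named compute-0.log:

    import re
    from collections import defaultdict

    # Matches the sample lines above, e.g.
    #   "... DEBUG ceilometer.compute.pollsters [-] <uuid>/<meter> volume: <value> _stats_to_sample ..."
    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>[0-9.]+) _stats_to_sample"
    )

    def summarize(path="compute-0.log"):
        """Group sample volumes by meter name and instance UUID."""
        samples = defaultdict(lambda: defaultdict(list))
        with open(path) as fh:
            for line in fh:
                match = SAMPLE_RE.search(line)
                if match:
                    samples[match["meter"]][match["instance"]].append(float(match["volume"]))
        return samples

    if __name__ == "__main__":
        for meter, per_instance in sorted(summarize().items()):
            for instance, volumes in sorted(per_instance.items()):
                print(f"{meter:35s} {instance} samples={len(volumes)} last={volumes[-1]}")

Run against this excerpt it would, for example, report two disk.device.read.bytes samples per instance, one for each block device polled by PerDeviceReadBytesPollster in the 20:27:42.409 to 20:27:42.480 lines above.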
Dec 10 20:27:43 compute-0 podman[256620]: 2025-12-10 20:27:43.108658225 +0000 UTC m=+0.079282338 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:27:43 compute-0 podman[256619]: 2025-12-10 20:27:43.125672213 +0000 UTC m=+0.095565065 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 10 20:27:43 compute-0 podman[256621]: 2025-12-10 20:27:43.143897804 +0000 UTC m=+0.103521230 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:27:46 compute-0 podman[256676]: 2025-12-10 20:27:46.129049752 +0000 UTC m=+0.097993481 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd)
Dec 10 20:27:46 compute-0 podman[256677]: 2025-12-10 20:27:46.144838187 +0000 UTC m=+0.106717975 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:27:46 compute-0 nova_compute[189279]: 2025-12-10 20:27:46.777 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:47 compute-0 podman[256717]: 2025-12-10 20:27:47.175645827 +0000 UTC m=+0.147245527 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:27:47 compute-0 nova_compute[189279]: 2025-12-10 20:27:47.479 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:51 compute-0 nova_compute[189279]: 2025-12-10 20:27:51.780 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:52 compute-0 nova_compute[189279]: 2025-12-10 20:27:52.482 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:53 compute-0 podman[256743]: 2025-12-10 20:27:53.151040001 +0000 UTC m=+0.124226267 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251210, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 20:27:56 compute-0 nova_compute[189279]: 2025-12-10 20:27:56.784 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:57 compute-0 nova_compute[189279]: 2025-12-10 20:27:57.486 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:27:59 compute-0 podman[203484]: time="2025-12-10T20:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:27:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:27:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
Dec 10 20:28:01 compute-0 openstack_network_exporter[205632]: ERROR   20:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:28:01 compute-0 openstack_network_exporter[205632]: ERROR   20:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:28:01 compute-0 openstack_network_exporter[205632]: ERROR   20:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:28:01 compute-0 openstack_network_exporter[205632]: ERROR   20:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:28:01 compute-0 openstack_network_exporter[205632]: ERROR   20:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
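The appctl.go errors above mean the exporter found no *.ctl control-socket files for ovsdb-server or ovn-northd in its run directories; ovn-northd normally runs on the control plane rather than on a compute node, so that part of the output is often expected, and the dpif-netdev calls fail similarly when no userspace (netdev) datapath exists. The openstack_network_exporter config_data a few lines below mounts /var/run/openvswitch and /var/lib/openvswitch/ovn from the host, so a small sketch, assuming those host-side paths, can show which control sockets are actually present:

    import glob
    import os

    # Host-side run directories mounted into openstack_network_exporter
    # (paths taken from the container's config_data logged below; adjust if different).
    RUN_DIRS = ["/var/run/openvswitch", "/var/lib/openvswitch/ovn"]

    def list_control_sockets(directories=RUN_DIRS):
        """Report the *.ctl control sockets (if any) under each OVS/OVN run directory."""
        for directory in directories:
            if not os.path.isdir(directory):
                print(f"{directory}: directory missing")
                continue
            sockets = sorted(glob.glob(os.path.join(directory, "*.ctl")))
            if sockets:
                for path in sockets:
                    print(f"{directory}: {os.path.basename(path)}")
            else:
                print(f"{directory}: no *.ctl files - matches the 'no control socket files found' errors")

    if __name__ == "__main__":
        list_control_sockets()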
Dec 10 20:28:01 compute-0 nova_compute[189279]: 2025-12-10 20:28:01.788 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:02 compute-0 nova_compute[189279]: 2025-12-10 20:28:02.488 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:05 compute-0 podman[256762]: 2025-12-10 20:28:05.120725108 +0000 UTC m=+0.085378931 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 10 20:28:05 compute-0 podman[256761]: 2025-12-10 20:28:05.146905113 +0000 UTC m=+0.113031646 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:28:06 compute-0 nova_compute[189279]: 2025-12-10 20:28:06.792 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:07 compute-0 nova_compute[189279]: 2025-12-10 20:28:07.491 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:09 compute-0 nova_compute[189279]: 2025-12-10 20:28:09.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:11 compute-0 nova_compute[189279]: 2025-12-10 20:28:11.796 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:12 compute-0 nova_compute[189279]: 2025-12-10 20:28:12.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:12 compute-0 nova_compute[189279]: 2025-12-10 20:28:12.493 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:14 compute-0 podman[256802]: 2025-12-10 20:28:14.096518791 +0000 UTC m=+0.073797919 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:28:14 compute-0 podman[256803]: 2025-12-10 20:28:14.109497631 +0000 UTC m=+0.086144462 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Dec 10 20:28:14 compute-0 podman[256804]: 2025-12-10 20:28:14.155648364 +0000 UTC m=+0.118134023 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, managed_by=edpm_ansible, distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 10 20:28:15 compute-0 nova_compute[189279]: 2025-12-10 20:28:15.484 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:16 compute-0 nova_compute[189279]: 2025-12-10 20:28:16.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:16 compute-0 nova_compute[189279]: 2025-12-10 20:28:16.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:28:16 compute-0 nova_compute[189279]: 2025-12-10 20:28:16.800 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:17 compute-0 podman[256861]: 2025-12-10 20:28:17.089259924 +0000 UTC m=+0.065033883 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:28:17 compute-0 podman[256860]: 2025-12-10 20:28:17.096723245 +0000 UTC m=+0.072556025 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:28:17 compute-0 nova_compute[189279]: 2025-12-10 20:28:17.496 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:18 compute-0 podman[256902]: 2025-12-10 20:28:18.120770142 +0000 UTC m=+0.101208217 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.512 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.513 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.513 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.513 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.580 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.658 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.661 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.723 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.734 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.799 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.801 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 10 20:28:18 compute-0 nova_compute[189279]: 2025-12-10 20:28:18.861 189283 DEBUG oslo_concurrency.processutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.170 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.171 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4891MB free_disk=72.23250198364258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.172 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.172 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.257 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance ca7daa1b-94a2-4e08-902b-73be0ab83974 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.257 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Instance cc1e9e66-56af-4162-a89f-c97758ee1a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.258 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.258 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.310 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.331 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.332 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:28:19 compute-0 nova_compute[189279]: 2025-12-10 20:28:19.332 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:20 compute-0 nova_compute[189279]: 2025-12-10 20:28:20.333 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:20 compute-0 nova_compute[189279]: 2025-12-10 20:28:20.335 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:28:20 compute-0 nova_compute[189279]: 2025-12-10 20:28:20.762 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 10 20:28:20 compute-0 nova_compute[189279]: 2025-12-10 20:28:20.763 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquired lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 10 20:28:20 compute-0 nova_compute[189279]: 2025-12-10 20:28:20.763 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 10 20:28:21 compute-0 nova_compute[189279]: 2025-12-10 20:28:21.802 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:22 compute-0 nova_compute[189279]: 2025-12-10 20:28:22.497 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:22 compute-0 nova_compute[189279]: 2025-12-10 20:28:22.713 189283 DEBUG nova.network.neutron [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [{"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:28:22 compute-0 nova_compute[189279]: 2025-12-10 20:28:22.778 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Releasing lock "refresh_cache-cc1e9e66-56af-4162-a89f-c97758ee1a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 10 20:28:22 compute-0 nova_compute[189279]: 2025-12-10 20:28:22.779 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 10 20:28:22 compute-0 nova_compute[189279]: 2025-12-10 20:28:22.780 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:22 compute-0 nova_compute[189279]: 2025-12-10 20:28:22.780 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:23.419 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:23.420 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:23.420 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:24 compute-0 podman[256941]: 2025-12-10 20:28:24.122859586 +0000 UTC m=+0.096787259 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 10 20:28:25 compute-0 nova_compute[189279]: 2025-12-10 20:28:25.489 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:28:26 compute-0 nova_compute[189279]: 2025-12-10 20:28:26.805 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:27 compute-0 nova_compute[189279]: 2025-12-10 20:28:27.501 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:29 compute-0 podman[203484]: time="2025-12-10T20:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:28:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:28:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
Dec 10 20:28:31 compute-0 openstack_network_exporter[205632]: ERROR   20:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:28:31 compute-0 openstack_network_exporter[205632]: ERROR   20:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:28:31 compute-0 openstack_network_exporter[205632]: ERROR   20:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:28:31 compute-0 openstack_network_exporter[205632]: ERROR   20:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:28:31 compute-0 openstack_network_exporter[205632]: ERROR   20:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:28:31 compute-0 nova_compute[189279]: 2025-12-10 20:28:31.808 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:32 compute-0 nova_compute[189279]: 2025-12-10 20:28:32.505 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:36 compute-0 podman[256963]: 2025-12-10 20:28:36.122073338 +0000 UTC m=+0.098907386 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:28:36 compute-0 podman[256964]: 2025-12-10 20:28:36.160870734 +0000 UTC m=+0.131706510 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Dec 10 20:28:36 compute-0 sshd-session[256961]: Invalid user solv from 80.94.92.184 port 44084
Dec 10 20:28:36 compute-0 sshd-session[256961]: Connection closed by invalid user solv 80.94.92.184 port 44084 [preauth]
Dec 10 20:28:36 compute-0 nova_compute[189279]: 2025-12-10 20:28:36.811 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:37 compute-0 nova_compute[189279]: 2025-12-10 20:28:37.507 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:41 compute-0 nova_compute[189279]: 2025-12-10 20:28:41.815 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:42 compute-0 nova_compute[189279]: 2025-12-10 20:28:42.509 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:44 compute-0 podman[257004]: 2025-12-10 20:28:44.734979005 +0000 UTC m=+0.065827964 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 10 20:28:44 compute-0 podman[257005]: 2025-12-10 20:28:44.743308579 +0000 UTC m=+0.072392200 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:28:44 compute-0 podman[257006]: 2025-12-10 20:28:44.750760611 +0000 UTC m=+0.072278259 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, version=9.4, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 10 20:28:46 compute-0 nova_compute[189279]: 2025-12-10 20:28:46.819 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:47 compute-0 nova_compute[189279]: 2025-12-10 20:28:47.511 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:48 compute-0 podman[257062]: 2025-12-10 20:28:48.161270117 +0000 UTC m=+0.115121322 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 10 20:28:48 compute-0 podman[257061]: 2025-12-10 20:28:48.206417244 +0000 UTC m=+0.169079446 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 10 20:28:48 compute-0 podman[257099]: 2025-12-10 20:28:48.348342827 +0000 UTC m=+0.149530219 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 10 20:28:51 compute-0 nova_compute[189279]: 2025-12-10 20:28:51.823 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:52 compute-0 nova_compute[189279]: 2025-12-10 20:28:52.513 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.683 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.684 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.685 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.686 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.686 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.689 189283 INFO nova.compute.manager [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Terminating instance
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.691 189283 DEBUG nova.compute.manager [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:28:54 compute-0 kernel: tap809bdeda-a7 (unregistering): left promiscuous mode
Dec 10 20:28:54 compute-0 NetworkManager[56238]: <info>  [1765398534.8911] device (tap809bdeda-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:28:54 compute-0 ovn_controller[97701]: 2025-12-10T20:28:54Z|00221|binding|INFO|Releasing lport 809bdeda-a71c-4370-a746-873e31aa580c from this chassis (sb_readonly=0)
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.899 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:54 compute-0 ovn_controller[97701]: 2025-12-10T20:28:54Z|00222|binding|INFO|Setting lport 809bdeda-a71c-4370-a746-873e31aa580c down in Southbound
Dec 10 20:28:54 compute-0 ovn_controller[97701]: 2025-12-10T20:28:54Z|00223|binding|INFO|Removing iface tap809bdeda-a7 ovn-installed in OVS
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.906 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:54.913 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:fb:da 10.100.1.68'], port_security=['fa:16:3e:9b:fb:da 10.100.1.68'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.68/16', 'neutron:device_id': 'ca7daa1b-94a2-4e08-902b-73be0ab83974', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5861e367-6dd6-4128-97c5-6a0449548387', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e773c65970c34c9db154c6fea65d9fa4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '423352dd-9d4c-474d-a8f0-1199c6062876', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=742d4e89-613f-49d1-83dc-36d4a9402367, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=809bdeda-a71c-4370-a746-873e31aa580c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:28:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:54.915 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 809bdeda-a71c-4370-a746-873e31aa580c in datapath 5861e367-6dd6-4128-97c5-6a0449548387 unbound from our chassis
Dec 10 20:28:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:54.918 106564 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5861e367-6dd6-4128-97c5-6a0449548387
Dec 10 20:28:54 compute-0 nova_compute[189279]: 2025-12-10 20:28:54.930 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:54.960 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[5c10dabd-de7b-4ee9-9fe1-a7da057f2da0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:28:54 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec 10 20:28:54 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 7min 17.793s CPU time.
Dec 10 20:28:54 compute-0 systemd-machined[155642]: Machine qemu-15-instance-0000000e terminated.
Dec 10 20:28:54 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:54.998 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[96008948-836b-4a88-a1d5-cfaffd2faef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.001 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9d6443-f575-41b2-b75b-f3d2f5a29940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:28:55 compute-0 podman[257132]: 2025-12-10 20:28:55.030534522 +0000 UTC m=+0.101333151 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251210, org.label-schema.schema-version=1.0)
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.030 239437 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba7abf3-ede8-4806-b47b-ca37b47949b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.051 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[d11b9292-5250-4877-ae28-ed4bb4e353a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5861e367-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:88:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499821, 'reachable_time': 43903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257161, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.074 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0064ba9f-b4b6-484a-b6d7-5836d68f3ee2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5861e367-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499838, 'tstamp': 499838}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257162, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap5861e367-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499844, 'tstamp': 499844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257162, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.076 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5861e367-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.078 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.084 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5861e367-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.084 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.084 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.085 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5861e367-60, col_values=(('external_ids', {'iface-id': 'eedd7beb-1e55-4b8d-a932-7d0592d2e98a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.085 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.118 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.124 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.162 189283 INFO nova.virt.libvirt.driver [-] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Instance destroyed successfully.
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.163 189283 DEBUG nova.objects.instance [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lazy-loading 'resources' on Instance uuid ca7daa1b-94a2-4e08-902b-73be0ab83974 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.180 189283 DEBUG nova.virt.libvirt.vif [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9274211-asg-3wo7pgzcqjfb-kz5rrnmehbue-iubf4hp3lb7r',id=14,image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:14:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e773c65970c34c9db154c6fea65d9fa4',ramdisk_id='',reservation_id='r-fd9mp2qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1355872434',owner_user_name='tempest-PrometheusGabbiTest-1355872434-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:14:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='639468767e8f48a1bd0e3dac90a0ec47',uuid=ca7daa1b-94a2-4e08-902b-73be0ab83974,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.181 189283 DEBUG nova.network.os_vif_util [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converting VIF {"id": "809bdeda-a71c-4370-a746-873e31aa580c", "address": "fa:16:3e:9b:fb:da", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap809bdeda-a7", "ovs_interfaceid": "809bdeda-a71c-4370-a746-873e31aa580c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.181 189283 DEBUG nova.network.os_vif_util [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:fb:da,bridge_name='br-int',has_traffic_filtering=True,id=809bdeda-a71c-4370-a746-873e31aa580c,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap809bdeda-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.182 189283 DEBUG os_vif [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:fb:da,bridge_name='br-int',has_traffic_filtering=True,id=809bdeda-a71c-4370-a746-873e31aa580c,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap809bdeda-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.183 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.183 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap809bdeda-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.185 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.187 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.189 189283 INFO os_vif [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:fb:da,bridge_name='br-int',has_traffic_filtering=True,id=809bdeda-a71c-4370-a746-873e31aa580c,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap809bdeda-a7')
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.190 189283 INFO nova.virt.libvirt.driver [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Deleting instance files /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974_del
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.190 189283 INFO nova.virt.libvirt.driver [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Deletion of /var/lib/nova/instances/ca7daa1b-94a2-4e08-902b-73be0ab83974_del complete
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.266 189283 INFO nova.compute.manager [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Took 0.57 seconds to destroy the instance on the hypervisor.
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.267 189283 DEBUG oslo.service.loopingcall [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.267 189283 DEBUG nova.compute.manager [-] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.268 189283 DEBUG nova.network.neutron [-] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.352 189283 DEBUG nova.compute.manager [req-c3e62aac-fe60-48e2-8034-28023df9da45 req-1f0987db-7602-4806-ad04-fe55796cc386 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received event network-vif-unplugged-809bdeda-a71c-4370-a746-873e31aa580c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.353 189283 DEBUG oslo_concurrency.lockutils [req-c3e62aac-fe60-48e2-8034-28023df9da45 req-1f0987db-7602-4806-ad04-fe55796cc386 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.353 189283 DEBUG oslo_concurrency.lockutils [req-c3e62aac-fe60-48e2-8034-28023df9da45 req-1f0987db-7602-4806-ad04-fe55796cc386 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.353 189283 DEBUG oslo_concurrency.lockutils [req-c3e62aac-fe60-48e2-8034-28023df9da45 req-1f0987db-7602-4806-ad04-fe55796cc386 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.354 189283 DEBUG nova.compute.manager [req-c3e62aac-fe60-48e2-8034-28023df9da45 req-1f0987db-7602-4806-ad04-fe55796cc386 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] No waiting events found dispatching network-vif-unplugged-809bdeda-a71c-4370-a746-873e31aa580c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.354 189283 DEBUG nova.compute.manager [req-c3e62aac-fe60-48e2-8034-28023df9da45 req-1f0987db-7602-4806-ad04-fe55796cc386 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received event network-vif-unplugged-809bdeda-a71c-4370-a746-873e31aa580c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:28:55 compute-0 nova_compute[189279]: 2025-12-10 20:28:55.396 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.396 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5a:3b:5c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:7c:4a:59:63:97'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:28:55 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:28:55.397 106564 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.517 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.634 189283 DEBUG nova.network.neutron [-] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.667 189283 INFO nova.compute.manager [-] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Took 2.40 seconds to deallocate network for instance.
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.701 189283 DEBUG nova.compute.manager [req-0920b6bd-99cf-4a46-a61d-57ef3034e6d6 req-52c788a3-7cf4-49df-b85b-4ddb30141c05 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received event network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.702 189283 DEBUG oslo_concurrency.lockutils [req-0920b6bd-99cf-4a46-a61d-57ef3034e6d6 req-52c788a3-7cf4-49df-b85b-4ddb30141c05 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.702 189283 DEBUG oslo_concurrency.lockutils [req-0920b6bd-99cf-4a46-a61d-57ef3034e6d6 req-52c788a3-7cf4-49df-b85b-4ddb30141c05 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.702 189283 DEBUG oslo_concurrency.lockutils [req-0920b6bd-99cf-4a46-a61d-57ef3034e6d6 req-52c788a3-7cf4-49df-b85b-4ddb30141c05 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.702 189283 DEBUG nova.compute.manager [req-0920b6bd-99cf-4a46-a61d-57ef3034e6d6 req-52c788a3-7cf4-49df-b85b-4ddb30141c05 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] No waiting events found dispatching network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.703 189283 WARNING nova.compute.manager [req-0920b6bd-99cf-4a46-a61d-57ef3034e6d6 req-52c788a3-7cf4-49df-b85b-4ddb30141c05 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received unexpected event network-vif-plugged-809bdeda-a71c-4370-a746-873e31aa580c for instance with vm_state active and task_state deleting.
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.720 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.721 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.796 189283 DEBUG nova.compute.provider_tree [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.816 189283 DEBUG nova.scheduler.client.report [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.833 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.855 189283 INFO nova.scheduler.client.report [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Deleted allocations for instance ca7daa1b-94a2-4e08-902b-73be0ab83974
Dec 10 20:28:57 compute-0 nova_compute[189279]: 2025-12-10 20:28:57.920 189283 DEBUG oslo_concurrency.lockutils [None req-000c1eb8-62a9-488a-afcf-cd6640636a69 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "ca7daa1b-94a2-4e08-902b-73be0ab83974" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:28:59 compute-0 podman[203484]: time="2025-12-10T20:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:28:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 10 20:28:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec 10 20:28:59 compute-0 nova_compute[189279]: 2025-12-10 20:28:59.780 189283 DEBUG nova.compute.manager [req-a7054730-81d0-4969-9651-81ae176d488c req-defae1a1-662f-4540-96d0-5b25d58e0b74 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Received event network-vif-deleted-809bdeda-a71c-4370-a746-873e31aa580c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:29:00 compute-0 nova_compute[189279]: 2025-12-10 20:29:00.187 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:01 compute-0 openstack_network_exporter[205632]: ERROR   20:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:29:01 compute-0 openstack_network_exporter[205632]: ERROR   20:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:29:01 compute-0 openstack_network_exporter[205632]: ERROR   20:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:29:01 compute-0 openstack_network_exporter[205632]: ERROR   20:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:29:01 compute-0 openstack_network_exporter[205632]: ERROR   20:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:29:02 compute-0 nova_compute[189279]: 2025-12-10 20:29:02.519 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:03 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:03.399 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7a61d05e-c2a0-4dab-a5da-4bf5a29b3ff7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.190 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.260 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.261 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.261 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.261 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.262 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.263 189283 INFO nova.compute.manager [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Terminating instance
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.264 189283 DEBUG nova.compute.manager [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 10 20:29:05 compute-0 kernel: tap191db221-f5 (unregistering): left promiscuous mode
Dec 10 20:29:05 compute-0 NetworkManager[56238]: <info>  [1765398545.3020] device (tap191db221-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.311 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 ovn_controller[97701]: 2025-12-10T20:29:05Z|00224|binding|INFO|Releasing lport 191db221-f5ea-4b4e-aa90-70dca09235b1 from this chassis (sb_readonly=0)
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.313 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 ovn_controller[97701]: 2025-12-10T20:29:05Z|00225|binding|INFO|Setting lport 191db221-f5ea-4b4e-aa90-70dca09235b1 down in Southbound
Dec 10 20:29:05 compute-0 ovn_controller[97701]: 2025-12-10T20:29:05Z|00226|binding|INFO|Removing iface tap191db221-f5 ovn-installed in OVS
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.317 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.325 106564 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:91:03 10.100.1.212'], port_security=['fa:16:3e:fb:91:03 10.100.1.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.212/16', 'neutron:device_id': 'cc1e9e66-56af-4162-a89f-c97758ee1a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5861e367-6dd6-4128-97c5-6a0449548387', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e773c65970c34c9db154c6fea65d9fa4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '423352dd-9d4c-474d-a8f0-1199c6062876', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=742d4e89-613f-49d1-83dc-36d4a9402367, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>], logical_port=191db221-f5ea-4b4e-aa90-70dca09235b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f97ccbf56a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.326 106564 INFO neutron.agent.ovn.metadata.agent [-] Port 191db221-f5ea-4b4e-aa90-70dca09235b1 in datapath 5861e367-6dd6-4128-97c5-6a0449548387 unbound from our chassis
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.327 106564 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5861e367-6dd6-4128-97c5-6a0449548387, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.329 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[0276563d-bf0c-4f3a-b753-48d84ee07356]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.329 106564 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387 namespace which is not needed anymore
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.336 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Deactivated successfully.
Dec 10 20:29:05 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Consumed 6min 42.012s CPU time.
Dec 10 20:29:05 compute-0 systemd-machined[155642]: Machine qemu-17-instance-00000010 terminated.
Dec 10 20:29:05 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [NOTICE]   (251179) : haproxy version is 2.8.14-c23fe91
Dec 10 20:29:05 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [NOTICE]   (251179) : path to executable is /usr/sbin/haproxy
Dec 10 20:29:05 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [WARNING]  (251179) : Exiting Master process...
Dec 10 20:29:05 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [WARNING]  (251179) : Exiting Master process...
Dec 10 20:29:05 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [ALERT]    (251179) : Current worker (251182) exited with code 143 (Terminated)
Dec 10 20:29:05 compute-0 neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387[251159]: [WARNING]  (251179) : All workers exited. Exiting... (0)
Dec 10 20:29:05 compute-0 systemd[1]: libpod-044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1.scope: Deactivated successfully.
Dec 10 20:29:05 compute-0 podman[257205]: 2025-12-10 20:29:05.504063948 +0000 UTC m=+0.060016348 container died 044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.528 189283 INFO nova.virt.libvirt.driver [-] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Instance destroyed successfully.
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.529 189283 DEBUG nova.objects.instance [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lazy-loading 'resources' on Instance uuid cc1e9e66-56af-4162-a89f-c97758ee1a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 10 20:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1-userdata-shm.mount: Deactivated successfully.
Dec 10 20:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f19d9126430d3f3dc75a85cebc7f2afa5ecc017e265d7746a85832384e5896c3-merged.mount: Deactivated successfully.
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.554 189283 DEBUG nova.virt.libvirt.vif [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-10T20:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9274211-asg-3wo7pgzcqjfb-lhnbylfvcyqd-vtudev2km2tq',id=16,image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-10T20:19:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bdbf0582-1c35-4e4b-a8ab-6f9a15ce9dda'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e773c65970c34c9db154c6fea65d9fa4',ramdisk_id='',reservation_id='r-1svzys4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ab2dea70-7375-4e2d-beda-90f19a5ec15e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1355872434',owner_user_name='tempest-PrometheusGabbiTest-1355872434-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-10T20:19:03Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='639468767e8f48a1bd0e3dac90a0ec47',uuid=cc1e9e66-56af-4162-a89f-c97758ee1a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.555 189283 DEBUG nova.network.os_vif_util [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converting VIF {"id": "191db221-f5ea-4b4e-aa90-70dca09235b1", "address": "fa:16:3e:fb:91:03", "network": {"id": "5861e367-6dd6-4128-97c5-6a0449548387", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e773c65970c34c9db154c6fea65d9fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap191db221-f5", "ovs_interfaceid": "191db221-f5ea-4b4e-aa90-70dca09235b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 10 20:29:05 compute-0 podman[257205]: 2025-12-10 20:29:05.556037658 +0000 UTC m=+0.111990068 container cleanup 044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.556 189283 DEBUG nova.network.os_vif_util [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:91:03,bridge_name='br-int',has_traffic_filtering=True,id=191db221-f5ea-4b4e-aa90-70dca09235b1,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap191db221-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.556 189283 DEBUG os_vif [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:91:03,bridge_name='br-int',has_traffic_filtering=True,id=191db221-f5ea-4b4e-aa90-70dca09235b1,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap191db221-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.557 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.558 189283 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap191db221-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.559 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.562 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.564 189283 INFO os_vif [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:91:03,bridge_name='br-int',has_traffic_filtering=True,id=191db221-f5ea-4b4e-aa90-70dca09235b1,network=Network(5861e367-6dd6-4128-97c5-6a0449548387),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap191db221-f5')
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.564 189283 INFO nova.virt.libvirt.driver [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Deleting instance files /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64_del
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.565 189283 INFO nova.virt.libvirt.driver [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Deletion of /var/lib/nova/instances/cc1e9e66-56af-4162-a89f-c97758ee1a64_del complete
Dec 10 20:29:05 compute-0 systemd[1]: libpod-conmon-044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1.scope: Deactivated successfully.
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.618 189283 INFO nova.compute.manager [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Took 0.35 seconds to destroy the instance on the hypervisor.
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.619 189283 DEBUG oslo.service.loopingcall [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.619 189283 DEBUG nova.compute.manager [-] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.620 189283 DEBUG nova.network.neutron [-] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.648 189283 DEBUG nova.compute.manager [req-bcd0d3e7-405d-456e-9679-868865332462 req-8254b655-bb27-48d4-8547-331e5929f82c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received event network-vif-unplugged-191db221-f5ea-4b4e-aa90-70dca09235b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.649 189283 DEBUG oslo_concurrency.lockutils [req-bcd0d3e7-405d-456e-9679-868865332462 req-8254b655-bb27-48d4-8547-331e5929f82c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:05 compute-0 podman[257251]: 2025-12-10 20:29:05.649471725 +0000 UTC m=+0.060136970 container remove 044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.649 189283 DEBUG oslo_concurrency.lockutils [req-bcd0d3e7-405d-456e-9679-868865332462 req-8254b655-bb27-48d4-8547-331e5929f82c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.649 189283 DEBUG oslo_concurrency.lockutils [req-bcd0d3e7-405d-456e-9679-868865332462 req-8254b655-bb27-48d4-8547-331e5929f82c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.649 189283 DEBUG nova.compute.manager [req-bcd0d3e7-405d-456e-9679-868865332462 req-8254b655-bb27-48d4-8547-331e5929f82c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] No waiting events found dispatching network-vif-unplugged-191db221-f5ea-4b4e-aa90-70dca09235b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.650 189283 DEBUG nova.compute.manager [req-bcd0d3e7-405d-456e-9679-868865332462 req-8254b655-bb27-48d4-8547-331e5929f82c 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received event network-vif-unplugged-191db221-f5ea-4b4e-aa90-70dca09235b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.656 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c0afba-e2b0-4019-b332-98c12fb6ee11]: (4, ('Wed Dec 10 08:29:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387 (044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1)\n044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1\nWed Dec 10 08:29:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387 (044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1)\n044e6b6451c9351fa67996bfc04bdb41690998f783eaccb280189ffde47ae5f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.658 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[1b74abc2-7202-4dc4-8cec-9defcd7dac6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.659 106564 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5861e367-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.660 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 kernel: tap5861e367-60: left promiscuous mode
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.663 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.667 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[06709fe4-1aed-4bf6-8224-b29162cdb5e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 nova_compute[189279]: 2025-12-10 20:29:05.680 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.689 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[92e71295-e647-4c21-ace3-f95028cee098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.690 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[6280c39c-4be1-4002-aace-c453cfa82631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.705 239384 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c17290-26b7-41eb-b01a-1ce5e7f25b1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499811, 'reachable_time': 43915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257265, 'error': None, 'target': 'ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.708 106676 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5861e367-6dd6-4128-97c5-6a0449548387 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 10 20:29:05 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:05.708 106676 DEBUG oslo.privsep.daemon [-] privsep: reply[eb70c400-9a4b-4181-b509-bb948da81dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 10 20:29:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d5861e367\x2d6dd6\x2d4128\x2d97c5\x2d6a0449548387.mount: Deactivated successfully.
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.157 189283 DEBUG nova.network.neutron [-] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.170 189283 INFO nova.compute.manager [-] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Took 0.55 seconds to deallocate network for instance.
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.215 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.216 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.266 189283 DEBUG nova.compute.provider_tree [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.279 189283 DEBUG nova.scheduler.client.report [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.304 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.324 189283 INFO nova.scheduler.client.report [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Deleted allocations for instance cc1e9e66-56af-4162-a89f-c97758ee1a64
Dec 10 20:29:06 compute-0 nova_compute[189279]: 2025-12-10 20:29:06.388 189283 DEBUG oslo_concurrency.lockutils [None req-ad41026f-4af5-485f-8e65-53d8834bec96 639468767e8f48a1bd0e3dac90a0ec47 e773c65970c34c9db154c6fea65d9fa4 - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:29:07 compute-0 podman[257266]: 2025-12-10 20:29:07.072592162 +0000 UTC m=+0.055723473 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 10 20:29:07 compute-0 podman[257267]: 2025-12-10 20:29:07.083695751 +0000 UTC m=+0.065741122 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.522 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.751 189283 DEBUG nova.compute.manager [req-0d469e87-8afd-48d7-812d-b176444a79d6 req-f7b40fc9-2cb3-4921-94ef-82bf9f9e0c20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received event network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.752 189283 DEBUG oslo_concurrency.lockutils [req-0d469e87-8afd-48d7-812d-b176444a79d6 req-f7b40fc9-2cb3-4921-94ef-82bf9f9e0c20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Acquiring lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.752 189283 DEBUG oslo_concurrency.lockutils [req-0d469e87-8afd-48d7-812d-b176444a79d6 req-f7b40fc9-2cb3-4921-94ef-82bf9f9e0c20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.752 189283 DEBUG oslo_concurrency.lockutils [req-0d469e87-8afd-48d7-812d-b176444a79d6 req-f7b40fc9-2cb3-4921-94ef-82bf9f9e0c20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] Lock "cc1e9e66-56af-4162-a89f-c97758ee1a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.752 189283 DEBUG nova.compute.manager [req-0d469e87-8afd-48d7-812d-b176444a79d6 req-f7b40fc9-2cb3-4921-94ef-82bf9f9e0c20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] No waiting events found dispatching network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.753 189283 WARNING nova.compute.manager [req-0d469e87-8afd-48d7-812d-b176444a79d6 req-f7b40fc9-2cb3-4921-94ef-82bf9f9e0c20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received unexpected event network-vif-plugged-191db221-f5ea-4b4e-aa90-70dca09235b1 for instance with vm_state deleted and task_state None.
Dec 10 20:29:07 compute-0 nova_compute[189279]: 2025-12-10 20:29:07.753 189283 DEBUG nova.compute.manager [req-0d469e87-8afd-48d7-812d-b176444a79d6 req-f7b40fc9-2cb3-4921-94ef-82bf9f9e0c20 69e03c0b55e849b5aae25fb4429b0003 37a105bb770c4f50a1aa32a517b677cf - - default default] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Received event network-vif-deleted-191db221-f5ea-4b4e-aa90-70dca09235b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 10 20:29:10 compute-0 nova_compute[189279]: 2025-12-10 20:29:10.160 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765398535.1588793, ca7daa1b-94a2-4e08-902b-73be0ab83974 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:29:10 compute-0 nova_compute[189279]: 2025-12-10 20:29:10.161 189283 INFO nova.compute.manager [-] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] VM Stopped (Lifecycle Event)
Dec 10 20:29:10 compute-0 nova_compute[189279]: 2025-12-10 20:29:10.177 189283 DEBUG nova.compute.manager [None req-3292b4e3-b998-4279-b3df-f7514e1332f5 - - - - - -] [instance: ca7daa1b-94a2-4e08-902b-73be0ab83974] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:29:10 compute-0 nova_compute[189279]: 2025-12-10 20:29:10.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:10 compute-0 nova_compute[189279]: 2025-12-10 20:29:10.561 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:12 compute-0 nova_compute[189279]: 2025-12-10 20:29:12.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:12 compute-0 nova_compute[189279]: 2025-12-10 20:29:12.527 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:14 compute-0 nova_compute[189279]: 2025-12-10 20:29:14.601 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:15 compute-0 podman[257309]: 2025-12-10 20:29:15.13088518 +0000 UTC m=+0.094573609 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec 10 20:29:15 compute-0 podman[257307]: 2025-12-10 20:29:15.156916831 +0000 UTC m=+0.119619553 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:29:15 compute-0 podman[257308]: 2025-12-10 20:29:15.158644728 +0000 UTC m=+0.125334257 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 10 20:29:15 compute-0 nova_compute[189279]: 2025-12-10 20:29:15.565 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:17 compute-0 nova_compute[189279]: 2025-12-10 20:29:17.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:17 compute-0 nova_compute[189279]: 2025-12-10 20:29:17.530 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.517 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.518 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.518 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.519 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:29:18 compute-0 podman[257363]: 2025-12-10 20:29:18.688518347 +0000 UTC m=+0.092890913 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 10 20:29:18 compute-0 podman[257362]: 2025-12-10 20:29:18.728500295 +0000 UTC m=+0.135191954 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:29:18 compute-0 podman[257361]: 2025-12-10 20:29:18.746023526 +0000 UTC m=+0.147636517 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.897 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.899 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5330MB free_disk=72.2906379699707GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.900 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.900 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.973 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.974 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:29:18 compute-0 nova_compute[189279]: 2025-12-10 20:29:18.998 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:29:19 compute-0 nova_compute[189279]: 2025-12-10 20:29:19.025 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:29:19 compute-0 nova_compute[189279]: 2025-12-10 20:29:19.057 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:29:19 compute-0 nova_compute[189279]: 2025-12-10 20:29:19.058 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:29:20 compute-0 nova_compute[189279]: 2025-12-10 20:29:20.527 189283 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765398545.5244813, cc1e9e66-56af-4162-a89f-c97758ee1a64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 10 20:29:20 compute-0 nova_compute[189279]: 2025-12-10 20:29:20.527 189283 INFO nova.compute.manager [-] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] VM Stopped (Lifecycle Event)
Dec 10 20:29:20 compute-0 nova_compute[189279]: 2025-12-10 20:29:20.549 189283 DEBUG nova.compute.manager [None req-bf4afefd-4544-4387-9c5b-3efa5916586c - - - - - -] [instance: cc1e9e66-56af-4162-a89f-c97758ee1a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 10 20:29:20 compute-0 nova_compute[189279]: 2025-12-10 20:29:20.568 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:21 compute-0 nova_compute[189279]: 2025-12-10 20:29:21.057 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:21 compute-0 nova_compute[189279]: 2025-12-10 20:29:21.058 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:29:21 compute-0 nova_compute[189279]: 2025-12-10 20:29:21.059 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:29:21 compute-0 nova_compute[189279]: 2025-12-10 20:29:21.074 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 20:29:21 compute-0 nova_compute[189279]: 2025-12-10 20:29:21.075 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:21 compute-0 nova_compute[189279]: 2025-12-10 20:29:21.077 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:22 compute-0 nova_compute[189279]: 2025-12-10 20:29:22.531 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:23.420 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:29:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:23.420 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:29:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:29:23.421 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
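Annotation: the Acquiring/acquired/released triplet above (and the "compute_resources" release earlier in this window) is the standard oslo.concurrency logging around a synchronized section, including the wait and hold times. A minimal sketch, assuming oslo.concurrency is installed; the function name is illustrative:

    from oslo_concurrency import lockutils


    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs only while the named in-process lock is held; lockutils emits
        # the "Acquiring lock" / "acquired" / "released" DEBUG lines with the
        # wait and hold durations seen in the journal.
        pass


    check_child_processes()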
Dec 10 20:29:25 compute-0 nova_compute[189279]: 2025-12-10 20:29:25.490 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:25 compute-0 nova_compute[189279]: 2025-12-10 20:29:25.572 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:26 compute-0 podman[257427]: 2025-12-10 20:29:26.145658661 +0000 UTC m=+0.118431691 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 10 20:29:27 compute-0 nova_compute[189279]: 2025-12-10 20:29:27.535 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:28 compute-0 nova_compute[189279]: 2025-12-10 20:29:28.483 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:29:29 compute-0 podman[203484]: time="2025-12-10T20:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:29:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:29:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
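Annotation: the two HTTP request lines above are the podman system service answering libpod REST calls from a Go client (container list, then per-container stats). A minimal sketch of the same containers/json query from Python; the socket path below is the conventional root-service default and is an assumption for this host, while the endpoint and query parameters are taken from the logged request:

    import http.client
    import json
    import socket


    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a Unix domain socket instead of TCP."""

        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock


    # Assumed default root socket for the podman system service.
    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true&external=false')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')
    conn.close()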
Dec 10 20:29:30 compute-0 nova_compute[189279]: 2025-12-10 20:29:30.577 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:31 compute-0 openstack_network_exporter[205632]: ERROR   20:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:29:31 compute-0 openstack_network_exporter[205632]: ERROR   20:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:29:31 compute-0 openstack_network_exporter[205632]: ERROR   20:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:29:31 compute-0 openstack_network_exporter[205632]: ERROR   20:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:29:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:29:31 compute-0 openstack_network_exporter[205632]: ERROR   20:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:29:31 compute-0 openstack_network_exporter[205632]: 
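Annotation: openstack_network_exporter fails its appctl-style calls above because it cannot locate the daemons' control sockets, and the dpif-netdev/pmd-* commands additionally require a userspace (netdev) datapath, which this node does not appear to run. A quick check for the conventional control-socket locations; the paths below are the usual defaults and are assumptions for this host:

    import glob

    # Conventional control-socket locations for ovsdb-server, ovs-vswitchd
    # and ovn-northd; adjust if the daemons use non-default run directories.
    patterns = (
        '/run/openvswitch/ovsdb-server.*.ctl',
        '/run/openvswitch/ovs-vswitchd.*.ctl',
        '/run/ovn/ovn-northd.*.ctl',
    )
    for pattern in patterns:
        matches = glob.glob(pattern)
        print(pattern, '->', matches if matches else 'no control socket found')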
Dec 10 20:29:32 compute-0 nova_compute[189279]: 2025-12-10 20:29:32.536 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:35 compute-0 nova_compute[189279]: 2025-12-10 20:29:35.582 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:37 compute-0 nova_compute[189279]: 2025-12-10 20:29:37.542 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:38 compute-0 podman[257448]: 2025-12-10 20:29:38.096865549 +0000 UTC m=+0.069843072 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:29:38 compute-0 podman[257449]: 2025-12-10 20:29:38.160403861 +0000 UTC m=+0.117399043 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., container_name=openstack_network_exporter)
Dec 10 20:29:40 compute-0 nova_compute[189279]: 2025-12-10 20:29:40.586 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.189 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.190 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.190 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcaa2bd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.191 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd2840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.192 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a740e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a741a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.193 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.193 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcaa2bd1ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.194 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcaa2bd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.195 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcaa1a74050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.195 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.194 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcaa1a740b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcaa2beee10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcaa1a74170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcaa1a74200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcaa2bd3a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.197 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcaa2bd1c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.197 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.196 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.198 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2c24b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcaa2bd3aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.199 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcaa2caae70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.199 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.199 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.200 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a743b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.200 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.201 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.201 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2ccf590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcaa1a742f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.202 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcaa1a74380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.202 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.202 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcaa1a74410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.203 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.203 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcaa2bd3560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.205 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcaa2bd1a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.204 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.205 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcaa2bd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.206 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.205 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcaa2bd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.207 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcaa2bd3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.207 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa1a74710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.207 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcaa2bd36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.208 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.208 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcaa1a746e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.209 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd37a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.209 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcaa2bd3710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.211 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcaa2bd3770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.211 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.210 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.211 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcaa2bd3fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcaa182cbc0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'disk.device.allocation': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'cpu': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcaa2bd3b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.212 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcaa2bd3da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcaa4105f70>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.212 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 10 20:29:42 compute-0 ceilometer_agent_compute[200029]: 2025-12-10 20:29:42.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
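The block above is one ceilometer polling cycle: the polling manager emits a "Finished processing pollster [...]" line for each meter it completed. A minimal sketch, not part of the deployment, for pulling the polled meter names back out of journal lines in exactly this format:

    import re

    POLLSTER_RE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    def polled_meters(lines):
        # Yield the meter name from each "Finished processing pollster [...]" line.
        for line in lines:
            match = POLLSTER_RE.search(line)
            if match:
                yield match.group(1)

    # Example: sorted(set(polled_meters(open("/var/log/messages"))))
    # lists every meter this agent finished during the captured window.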
Dec 10 20:29:42 compute-0 nova_compute[189279]: 2025-12-10 20:29:42.543 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:45 compute-0 nova_compute[189279]: 2025-12-10 20:29:45.590 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:46 compute-0 podman[257494]: 2025-12-10 20:29:46.100161835 +0000 UTC m=+0.068311032 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, version=9.4)
Dec 10 20:29:46 compute-0 podman[257493]: 2025-12-10 20:29:46.114321106 +0000 UTC m=+0.086968084 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 10 20:29:46 compute-0 podman[257492]: 2025-12-10 20:29:46.115998381 +0000 UTC m=+0.093460868 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 10 20:29:47 compute-0 nova_compute[189279]: 2025-12-10 20:29:47.547 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:49 compute-0 podman[257547]: 2025-12-10 20:29:49.147671192 +0000 UTC m=+0.124447914 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Dec 10 20:29:49 compute-0 podman[257548]: 2025-12-10 20:29:49.162161452 +0000 UTC m=+0.127481845 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 10 20:29:49 compute-0 podman[257546]: 2025-12-10 20:29:49.17880003 +0000 UTC m=+0.151802401 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 10 20:29:50 compute-0 nova_compute[189279]: 2025-12-10 20:29:50.594 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:52 compute-0 nova_compute[189279]: 2025-12-10 20:29:52.547 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:55 compute-0 nova_compute[189279]: 2025-12-10 20:29:55.597 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:56 compute-0 ovn_controller[97701]: 2025-12-10T20:29:56Z|00227|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec 10 20:29:57 compute-0 podman[257613]: 2025-12-10 20:29:57.122286125 +0000 UTC m=+0.105041230 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251210, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 10 20:29:57 compute-0 nova_compute[189279]: 2025-12-10 20:29:57.550 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:29:59 compute-0 podman[203484]: time="2025-12-10T20:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:29:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:29:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
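The two GET requests above are the podman API service answering prometheus-podman-exporter over its socket; the exporter's config_data later in this log sets CONTAINER_HOST to unix:///run/podman/podman.sock. A rough sketch, and not the exporter's actual code, of issuing the same libpod "list containers" call over that socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix socket, enough to talk to the podman API service."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    body = conn.getresponse().read()
    for container in json.loads(body):
        # "Names" and "State" are standard fields in the libpod list response.
        print(container["Names"], container["State"])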
Dec 10 20:30:00 compute-0 nova_compute[189279]: 2025-12-10 20:30:00.601 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:01 compute-0 openstack_network_exporter[205632]: ERROR   20:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:30:01 compute-0 openstack_network_exporter[205632]: ERROR   20:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:30:01 compute-0 openstack_network_exporter[205632]: ERROR   20:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:30:01 compute-0 openstack_network_exporter[205632]: ERROR   20:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:30:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:30:01 compute-0 openstack_network_exporter[205632]: ERROR   20:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:30:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:30:02 compute-0 nova_compute[189279]: 2025-12-10 20:30:02.554 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:05 compute-0 nova_compute[189279]: 2025-12-10 20:30:05.603 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:07 compute-0 nova_compute[189279]: 2025-12-10 20:30:07.557 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:09 compute-0 podman[257635]: 2025-12-10 20:30:09.095608499 +0000 UTC m=+0.072144205 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, release=1755695350)
Dec 10 20:30:09 compute-0 podman[257634]: 2025-12-10 20:30:09.12054975 +0000 UTC m=+0.095908674 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
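The node_exporter invocation above narrows the systemd collector to a handful of units via --collector.systemd.unit-include. A quick way to check which unit names that pattern keeps; the unit names below are examples, and treating the regex as a full-string match is an assumption about how the collector applies it:

    import re

    # Pattern copied from the --collector.systemd.unit-include flag above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("ovs-vswitchd.service", "virtqemud.service",
                 "edpm_nova_compute.service", "sshd.service"):
        print(unit, "kept" if unit_include.fullmatch(unit) else "dropped")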
Dec 10 20:30:10 compute-0 nova_compute[189279]: 2025-12-10 20:30:10.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:10 compute-0 nova_compute[189279]: 2025-12-10 20:30:10.607 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:12 compute-0 nova_compute[189279]: 2025-12-10 20:30:12.560 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:13 compute-0 nova_compute[189279]: 2025-12-10 20:30:13.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:15 compute-0 nova_compute[189279]: 2025-12-10 20:30:15.611 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:17 compute-0 podman[257678]: 2025-12-10 20:30:17.126782897 +0000 UTC m=+0.101418593 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 10 20:30:17 compute-0 podman[257679]: 2025-12-10 20:30:17.138490602 +0000 UTC m=+0.107599619 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 10 20:30:17 compute-0 podman[257680]: 2025-12-10 20:30:17.164994166 +0000 UTC m=+0.127686750 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, architecture=x86_64, distribution-scope=public, vcs-type=git, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 10 20:30:17 compute-0 nova_compute[189279]: 2025-12-10 20:30:17.563 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:18 compute-0 nova_compute[189279]: 2025-12-10 20:30:18.482 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:18 compute-0 nova_compute[189279]: 2025-12-10 20:30:18.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:18 compute-0 nova_compute[189279]: 2025-12-10 20:30:18.487 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 10 20:30:20 compute-0 podman[257735]: 2025-12-10 20:30:20.101057699 +0000 UTC m=+0.083215913 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 10 20:30:20 compute-0 podman[257736]: 2025-12-10 20:30:20.120525184 +0000 UTC m=+0.101612969 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 10 20:30:20 compute-0 podman[257734]: 2025-12-10 20:30:20.147701416 +0000 UTC m=+0.129351576 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.529 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.529 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.613 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.832 189283 WARNING nova.virt.libvirt.driver [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.834 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5357MB free_disk=72.29061889648438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.834 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:30:20 compute-0 nova_compute[189279]: 2025-12-10 20:30:20.835 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.077 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.078 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.093 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing inventories for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.159 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating ProviderTree inventory for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.160 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Updating inventory in ProviderTree for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.177 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing aggregate associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.203 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Refreshing trait associations for resource provider fc709657-cb59-4c0b-8f54-5be8a24ab091, traits: COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.227 189283 DEBUG nova.compute.provider_tree [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed in ProviderTree for provider: fc709657-cb59-4c0b-8f54-5be8a24ab091 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.248 189283 DEBUG nova.scheduler.client.report [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Inventory has not changed for provider fc709657-cb59-4c0b-8f54-5be8a24ab091 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.251 189283 DEBUG nova.compute.resource_tracker [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 10 20:30:21 compute-0 nova_compute[189279]: 2025-12-10 20:30:21.252 189283 DEBUG oslo_concurrency.lockutils [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
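The inventory the resource tracker reports above (VCPU, MEMORY_MB and DISK_GB, each with a reserved amount and an allocation ratio) is what placement turns into schedulable capacity, conventionally (total - reserved) * allocation_ratio. A small sketch reproducing those numbers from the logged data for provider fc709657-cb59-4c0b-8f54-5be8a24ab091:

    # Figures taken from the inventory lines logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for resource_class, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{resource_class}: schedulable capacity {capacity:g}")
    # -> VCPU 32, MEMORY_MB 7167, DISK_GB 70.2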
Dec 10 20:30:22 compute-0 nova_compute[189279]: 2025-12-10 20:30:22.252 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:22 compute-0 nova_compute[189279]: 2025-12-10 20:30:22.253 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 10 20:30:22 compute-0 nova_compute[189279]: 2025-12-10 20:30:22.254 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 10 20:30:22 compute-0 nova_compute[189279]: 2025-12-10 20:30:22.271 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 10 20:30:22 compute-0 nova_compute[189279]: 2025-12-10 20:30:22.271 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:22 compute-0 nova_compute[189279]: 2025-12-10 20:30:22.566 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:30:23.421 106564 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 10 20:30:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:30:23.422 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 10 20:30:23 compute-0 ovn_metadata_agent[106559]: 2025-12-10 20:30:23.422 106564 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 10 20:30:23 compute-0 nova_compute[189279]: 2025-12-10 20:30:23.487 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:23 compute-0 nova_compute[189279]: 2025-12-10 20:30:23.488 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 10 20:30:25 compute-0 nova_compute[189279]: 2025-12-10 20:30:25.617 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:26 compute-0 nova_compute[189279]: 2025-12-10 20:30:26.505 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:27 compute-0 nova_compute[189279]: 2025-12-10 20:30:27.567 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:28 compute-0 podman[257799]: 2025-12-10 20:30:28.085990971 +0000 UTC m=+0.064734705 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251210, org.label-schema.vendor=CentOS, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d)
Dec 10 20:30:29 compute-0 podman[203484]: time="2025-12-10T20:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:30:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:30:29 compute-0 podman[203484]: @ - - [10/Dec/2025:20:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
Dec 10 20:30:30 compute-0 nova_compute[189279]: 2025-12-10 20:30:30.620 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:31 compute-0 openstack_network_exporter[205632]: ERROR   20:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:30:31 compute-0 openstack_network_exporter[205632]: ERROR   20:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:30:31 compute-0 openstack_network_exporter[205632]: ERROR   20:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:30:31 compute-0 openstack_network_exporter[205632]: ERROR   20:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:30:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:30:31 compute-0 openstack_network_exporter[205632]: ERROR   20:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:30:31 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:30:32 compute-0 nova_compute[189279]: 2025-12-10 20:30:32.570 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:35 compute-0 nova_compute[189279]: 2025-12-10 20:30:35.623 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:37 compute-0 nova_compute[189279]: 2025-12-10 20:30:37.572 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:39 compute-0 nova_compute[189279]: 2025-12-10 20:30:39.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:39 compute-0 nova_compute[189279]: 2025-12-10 20:30:39.489 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 10 20:30:39 compute-0 nova_compute[189279]: 2025-12-10 20:30:39.528 189283 DEBUG nova.compute.manager [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 10 20:30:40 compute-0 podman[257818]: 2025-12-10 20:30:40.107205844 +0000 UTC m=+0.089396399 container health_status 22080f5472ec23b83be94637655ef1f96c442b40ec0120123c445edf0e14980f (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 10 20:30:40 compute-0 podman[257819]: 2025-12-10 20:30:40.131637242 +0000 UTC m=+0.101966338 container health_status d29389f25b6b789f0eb2f1e8d4c6aa0d6ec435234155a2320a603a023ab540d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 10 20:30:40 compute-0 nova_compute[189279]: 2025-12-10 20:30:40.627 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:42 compute-0 nova_compute[189279]: 2025-12-10 20:30:42.488 189283 DEBUG oslo_service.periodic_task [None req-cf43430a-9412-43ac-adb5-a08dbe4a4595 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 10 20:30:42 compute-0 nova_compute[189279]: 2025-12-10 20:30:42.575 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:45 compute-0 nova_compute[189279]: 2025-12-10 20:30:45.631 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:46 compute-0 sshd-session[257860]: Accepted publickey for zuul from 192.168.122.10 port 38672 ssh2: ECDSA SHA256:ojftEmhaknQ1KrnCQMFFHRKzVj7lUm3Qj3lcG3oQZSI
Dec 10 20:30:46 compute-0 systemd-logind[789]: New session 32 of user zuul.
Dec 10 20:30:46 compute-0 systemd[1]: Started Session 32 of User zuul.
Dec 10 20:30:46 compute-0 sshd-session[257860]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 10 20:30:46 compute-0 sudo[257864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 10 20:30:46 compute-0 sudo[257864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 10 20:30:47 compute-0 nova_compute[189279]: 2025-12-10 20:30:47.577 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:48 compute-0 podman[257903]: 2025-12-10 20:30:48.142539944 +0000 UTC m=+0.120637540 container health_status ffb291adddf8400e9b3ea6fa3c67389195e25cc3d1e3b4d869520fda11030854 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 10 20:30:48 compute-0 podman[257902]: 2025-12-10 20:30:48.143376787 +0000 UTC m=+0.125667706 container health_status e3649c12b6e2d58f960140cea97f21ffd7a1569278af75ffe19f160aab0a5953 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi)
Dec 10 20:30:48 compute-0 podman[257898]: 2025-12-10 20:30:48.149240655 +0000 UTC m=+0.136598010 container health_status 6af3bbb7a768dd5e0e8f96288b44cecf8603847fd9832c3790b67b71b1d57e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 10 20:30:50 compute-0 nova_compute[189279]: 2025-12-10 20:30:50.634 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:51 compute-0 podman[258068]: 2025-12-10 20:30:51.124665048 +0000 UTC m=+0.080788526 container health_status e73ec138547eaefad87aa2a9eb24a0b5772ae86c490783d6e3e8442a9e4d4b56 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 10 20:30:51 compute-0 podman[258064]: 2025-12-10 20:30:51.14253616 +0000 UTC m=+0.104116775 container health_status b8f820d325180691277bbea47ca55e25aa995f124881922fde02d2c0ee77c0c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 10 20:30:51 compute-0 podman[258063]: 2025-12-10 20:30:51.161969814 +0000 UTC m=+0.129983473 container health_status 9cc0e1d39b9998b5d920dc3d0ddb7e1d63fe465a9fb4ebfe9907bdcd5a9e7b17 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 10 20:30:52 compute-0 ovs-vsctl[258155]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 10 20:30:52 compute-0 nova_compute[189279]: 2025-12-10 20:30:52.579 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:52 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 257888 (sos)
Dec 10 20:30:52 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 10 20:30:52 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 10 20:30:53 compute-0 virtqemud[188902]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 10 20:30:53 compute-0 virtqemud[188902]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 10 20:30:53 compute-0 virtqemud[188902]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 10 20:30:54 compute-0 crontab[258581]: (root) LIST (root)
Dec 10 20:30:55 compute-0 nova_compute[189279]: 2025-12-10 20:30:55.637 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:56 compute-0 systemd[1]: Starting Hostname Service...
Dec 10 20:30:56 compute-0 systemd[1]: Started Hostname Service.
Dec 10 20:30:57 compute-0 nova_compute[189279]: 2025-12-10 20:30:57.581 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:30:58 compute-0 podman[258836]: 2025-12-10 20:30:58.566253344 +0000 UTC m=+0.081144187 container health_status 84f69dff8366813b88a9e542c40e9518814da4ebf3c0ce461d1b8484836897b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251210, tcib_build_tag=606b2c89ad911cb84d5fd44fd47bc74d, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 10 20:30:59 compute-0 podman[203484]: time="2025-12-10T20:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 10 20:30:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 10 20:30:59 compute-0 podman[203484]: @ - - [10/Dec/2025:20:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
Dec 10 20:31:00 compute-0 nova_compute[189279]: 2025-12-10 20:31:00.641 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:31:01 compute-0 openstack_network_exporter[205632]: ERROR   20:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:31:01 compute-0 openstack_network_exporter[205632]: ERROR   20:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 10 20:31:01 compute-0 openstack_network_exporter[205632]: ERROR   20:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 10 20:31:01 compute-0 openstack_network_exporter[205632]: ERROR   20:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 10 20:31:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:31:01 compute-0 openstack_network_exporter[205632]: ERROR   20:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 10 20:31:01 compute-0 openstack_network_exporter[205632]: 
Dec 10 20:31:02 compute-0 nova_compute[189279]: 2025-12-10 20:31:02.585 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 10 20:31:05 compute-0 ovs-appctl[259905]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 10 20:31:05 compute-0 ovs-appctl[259912]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 10 20:31:05 compute-0 ovs-appctl[259916]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 10 20:31:05 compute-0 nova_compute[189279]: 2025-12-10 20:31:05.643 189283 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
